When I am relatively happy with a section, it will be "locked" with a slight green background. All other sections are a work in progress and may not represent all, or even most, of what I intended to say on those matters. Each section has a corresponding discussion section; feel free to suggest improvements there.
Over the years, I've learned several interesting things about Wikipedia. Let me share them with you (but please note a lot of this reflects my views as of 2010-2011; some of the essays below may need a little update...):
Somewhere around 2008 I noticed a problem with how WP:3RR and edit warring are dealt with.
Consider the following case: Users A and B disagree about content. Discussion (and even some dispute resolution procedures) have been tried (on this article or others), and the users still cannot agree. An edit war ensues. User A makes 3 reverts, user B makes 4 reverts. Will user B report user A to 3RR? Of course not. Will user A report user B to 3RR? Perhaps, but there are admins who will judge them both guilty of edit warring and block them both. Thus user A, who did not break the 3RR, risks receiving the same penalty as user B, who did. User A cannot win the edit war against user B (because he is not willing to break the 3RR), and the article stays at user B's version.
Reverting less, by itself, will not do much good. Reporting user B after 3 reverts is not going to achieve anything, of course (since 3 reverts are allowed...), and even with two reverts, user A may be accused of edit warring.
Unless our user A wants to go for a dual permblock (in other words, kamikaze himself on one of the many edit warriors - the user Bs - he has to deal with), or slowly see his reputation ruined over time, his only option is to give up on the article(s) and let revert warriors win, at least temporarily (thus lowering the quality of Wikipedia, whose articles have been compromised by the edit warrior(s)).
Of course, users A and B don't operate in a vacuum. One would expect that the community will get involved and stop the edit warrior. But that is not always the case. When the targeted articles are low-key ones, with few or no editors watchlisting them (so nobody but users A and B is active on them), the bazaar principle that many eyes weed out the bugs (including disruptive edit warriors) fails to kick in. Sure - there are dispute resolution procedures, which allow user A to ask for those extra eyes. They are, however, lengthy and will require user A to devote time to explaining why a 3RR violator should be stopped (one would think this would be obvious), time user A will not spend creating content, policing other articles, or doing other constructive things. Not to mention that in most dispute resolutions, neutral editors (RfC and noticeboard commentators, mediators, etc.) rarely involve themselves in an edit war; they may agree with user A on talk - but what good is that if user B keeps disagreeing with everyone and reverting? Sure, editor A could try escalating the dispute to ArbCom (allowing the edit warrior to compromise x articles for months before an ArbCom ruling, and assuming the edit warrior is active enough to warrant ArbCom attention); he can ask more editors for input and involvement in the article (leading to accusations that he is forum shopping/canvassing, and often failing to attract attention to anything but himself anyway, particularly when organized teams of edit warriors (WP:TAGTEAM) step in); or he can engage in edit warring and hope that 3RR will be interpreted correctly. Or, of course, he can just give up.
This gets even worse if user A is an established editor who does not want to have his block record tarnished with blocks for edit warring. Even more so if user A is active enough to deal with several user Bs on various articles (so he may have to deal with several dispute resolutions); multiply this by a long history on Wikipedia and you have a user A who tries to ensure quality in many articles, defending them from periodic edit warriors, but who can easily be portrayed, in bad faith, as a long-term edit warrior.
In other words, there is an increasingly dangerous interpretation of EDITWAR ("it takes two to tango, so both are equally guilty") that replaces the clear interpretation of 3RR ("one reverts 4 times and he is guilty"). In effect, this penalizes users who try to stop edit warriors and empowers hardcore edit warriors - who either will not get reported and will succeed in pushing their version, or who will at least take down the user who tried to stop them (getting them blocked or at least tarnishing their reputation). Edit warriors also have much less reputation at stake than their opponents (who are more likely to be well respected and even be admins), are much more likely to be anonymous, and are in a much better position to abandon a tarnished old account and create a new one (particularly if they time their name change so that their old account is not blocked). The incentive increasingly becomes "let the disruptive edit warrior, user B, do whatever he wants; he is not worth my time and stress".
Of course, edit warring is bad; there is no disagreeing with that. But it will inevitably occur, due to edit warriors (particularly the "true believers" I discuss in the following section). It does not take two edit warriors for an edit war to occur - this is a common misconception. It takes one edit warrior, and a good editor who is not willing to let the edit warrior disrupt the article. If nobody breaks 3RR, but both keep at 3 reverts, that is when articles should get protected and dispute resolution engaged; there is no other choice. But if one editor breaks the 3RR and the other doesn't, the solution - and the difference - is simple.
Compromise-minded editors wind up in an environment where any attempts at compromise get taken advantage of. As a result they leave the project or adopt a different strategy. Edit warriors thrive, even though they themselves would prefer a compromise solution.
Solution: enforce the simple 3RR, not the vaguely interpreted EDITWAR. 3RR was created so that one would not have to go through lengthy procedures in order to stop obvious edit warriors. 3RR draws a simple and clear distinction between a user still inside wiki editing policies and one outside them. Invoking EDITWAR for both sides, when only one side violated 3RR, creates confusion, making 3RR increasingly useless (as fewer edit warriors will get reported and thus edit wars will continue), unpredictably unfair (random in outcome, as various admins pay varying attention to EDITWAR), and, worst of all, against the spirit of our project, allowing edit warriors to win content disputes and have their version stabilized on Wikipedia. EDITWAR should be used to penalize both sides only if neither has violated 3RR and both failed to seek dispute resolution and kept edit warring. If both sides violated 3RR, block both.
If an editor thinks he is truly neutral and has no POV, he is not only contradicting WP:NPOV (which clearly states that all editors have a POV: "All editors and all sources have biases"), but he is likely to refuse to ever compromise over content ("because he is not biased; on the contrary, he is completely neutral, right, and represents the truth"). One cannot reason with such a user (one can try, but one will always fail). Let's call such users "true believers" for the purpose of further discussion. There are also editors, known as "POV pushers", who likely realize they have some POV, but believe it is the "correct" one. For this discussion, there is little difference between the "POV pushers" and the "true believers", as their actions and the consequences differ little (besides, few "POV pushers" will admit they have a POV, so in effect they claim to be "true believers" anyway).
"True believers" will commonly edit war, and force those who disagree with them to edit war: since "true believers" cannot be made to change their mind on talk (mówił dziad do obrazu...), the other side will have little recourse but to deal with them in article mainspace, and of course the "true believers" will defend their changes in article space (since they find it hard to let their "truth" be erased). The more impulsive of them commonly have significant 3RR block histories, others - just long histories of edit warring.
"True believers" have a lot of bad faith: since they are obviously "right", they believe their opponents are "wrong". They will often discuss and criticize other editors, in more or less elaborate fashion, creating wiki battlegrounds. They will not shrink and will often start dispute resolution procedures, since they believe they are "right"; they will be shocked when the community fails to see their "truth" - which will often result in claims that others (mediators, arbcom, etc.) are biased and part of the "evil cabal".
Often, good, reasonable editors will give up or leave because they find dealing with "true believers" too stressful: "true believers" can't be convinced that they are not "100% right", they will edit war in defense of their claims, and they will accuse their opponents of various wrongdoings. Dealing with them is just a pain in the proverbial three letters.
Do note that "true believers" may edit in non-controversial content areas (either ones unrelated to their "true belief" or ones where it doesn't matter), contribute good content in them and be civil to editors they meet there. They are not trolls, damaging all they touch: their content disruption is much more selective and much more difficult to spot by a neutral editors, unfamiliar with the content issue (albeit the neutrals will likely pick up on "true believers" habitual edit warring and incivility in their "true belief" areas).
In my experience, "true believers" are most common in articles related to national histories and modern politics, roughly corresponding to nationalism. Presumably religion is similarly plagued, but it's not a set of topics I edit.
Test to detect a "true believer": in a given content area, lots of arguing/edit warring/incivility, little to no peer-reviewed (GA/A/FA) content created.
1) How to avoid becoming a true believer: realizing what POVs one has is crucial. Admitting them is difficult but also important, as is a willingness to negotiate around them and compromise when necessary. Remember: NPOV requires all significant POVs to be represented. That includes ones you may disagree with, although due weight is important. Realize your POV, don't be afraid to admit it publicly, try to understand the other side and try to reach a compromise. And remember: compromise is sometimes known as the situation where everyone is equally unhappy :) Learn to live with it. And never, ever, stoop to becoming a "true believer" yourself.
2) "True believers", once identified, should be banned. If one is not willing to compromise, the hundred or so wikis with a declared POV (like Conservapedia) are thataway.
3) In lieu of banning, which is not that easy to achieve, the best way to deal with the "true believers" is - just like with any vandalism - to keep reverting their vandalism out of the article. Sooner or later, they should get the point that their POVed edits are both unwelcome and useless on Wikipedia. That said, since as discussed above they are not exactly trolls, it is imperative to be civil and try to assume good faith in dealing with them - even if it will not cause them to respect you, it will avoid confusing neutral observers as to who holds the moral high ground here.
Disclaimer: Do note that in this essay I am not advocating, endorsing, or supporting assuming bad faith (I fully support assuming good faith all the time!); I am simply explaining (but not excusing) why assuming bad faith happens.
Most of us come to Wikipedia as good-faithed, but naive, editors. Wikipedia was built on good faith. Over time, we realize the depth of wikipolitics and the ulterior motives of some editors, and we grow more cynical and assume less good faith. It's a sad story, but one that simply parallels real life: growing out of childhood and teenage idealism, and moving into the adult world of realpolitik.
How badly you'll be hurt by the wikiworld depends, just as in real life, on where you come from and where you live (edit). You'll be exposed to more radical views and bad faith if you live in the Israeli-Palestinian disputed areas than if you live on a peaceful farm in Canada. If you edit rarely visited, uncontroversial Wikipedia articles (about your local town, or uncontroversial, obscure science), you'll have a more positive experience than if you deal with articles about abortion, global warming or the Holocaust.
The more one runs into highly POVed users ("true believers" are the worst), who tend to cluster in the popular and/or controversial articles, the more likely one will slowly radicalize against their POV (the "or" is important, as some controversial articles are very unpopular - little-known facts of Polish-Lithuanian history, usually kept alive by extreme nationalists of one side or another, for example). Even if you are the most kind-hearted peacemaker, after living for a few years in a conflict area, you will come to despise the radicals on both sides, who cause you stress and who will target you, simply because, in their mindset, if you are not with them, you are against them. And if you prefer one POV over another (which is completely legitimate and expected - NPOV does state: "all editors and all sources have a point of view"), you may slowly find yourself drifting more and more into extremism.
This includes:
not editing/creating certain articles ("why help them?", "they can create it themselves");
editing/creating certain articles ("how do you like this?");
assuming more good faith about your side than the others, leading to
defending problematic editors of one's side (also referred to as "grooming pet trolls" - with the unspoken rationale "he may be disruptive, but we need him to combat the even more disruptive editors on the other side");
supporting problematic editors in content disputes / discussions / eventually, even harassment of others ("because they have the right POV");
grouping "enemy" editors into "tag teams" ("users A and B share a similar POV and often work together"), and assuming that they have ulterior motives and at the very least are working against your side (WP:CABAL);
and associating an entire group of editors with a given "tag team" ("users A and B belong to nationality M, so all users of nationality M are as disruptive as A and B").
Some of the above are acceptable, some are borderline, others are outright bad. Sometimes one may be right (there "may" be a cabal out to get you - example); more often one is not (but one may be creating a self-fulfilling prophecy!).
Over time, this leads to more and more WP:BADFAITH on all sides. Good-faithed editors will either leave the arena of conflict, finding it too impolite or stressful (thus leading to a vicious spiral decreasing the ratio of good to bad editors in a given topic), or will become radicalized themselves - they will lose good faith, become more and more cynical, bad-faithed and radical, to an increasing extent supporting one side or at the very least advocating the use of the "ban hammer" with less and less thought ("let's pull the entire neighborhood down, it's impossible to save the ghetto"). With time, they'll find more and more examples to support bad faith (finding even one "true believer" a year may give one a decent sample of an "evil cabal" after a few years...). The end result? Certain topics become wiki-battlegrounds. And they spread, along with the radicalization (which spreads like a disease). See the model of mass radicalization of content areas for details.
Solution: Forgive. Assume as much good faith as you can; be moderate, and even support restrictions/bans on disruptive editors (including "true believers") who support your side; get mentorship, because you may not realize when you are crossing the line yourself.
Anonymity protects your true identity. There are many good reasons for it. I am not saying "anonymity is evil". Some editors may have very good reasons for being anonymous: for example, we have users editing from oppressive regimes, where their participation in this project may be illegal and land them in jail or worse (remember - Wikipedia is illegal in some countries!). Even users in the freedom-loving part of the world can have good reasons for anonymity - if one edits articles on porn or other taboo subjects, even with the best intentions, it may nonetheless not be something one wants associated with one's name. Or perhaps somebody is editing an article on a mafioso and would rather he didn't know who is updating his record? So certainly, anonymity should not be banned.
However, in most cases, it's not helpful. The vast majority of editors do not live in places where contributing to Wikipedia is illegal, nor do they contribute quality content to articles involvement with which could be dangerous (to one's reputation or anything else). Most anonymous editors simply lack the moral courage required to link their real persona to their POVs, and worse - emboldened by being anonymous, they write things - in articles and in discussions - that they would not have written if it could be traced to their real person. Anonymity allows others (including most "true believers" or pure and simple trolls) to hide under a nickname while launching uncivil attacks against others - including non-anonymous users. Anonymity makes it easier to engage in dubious editorial behavior - from edit warring to personal attacks.
Sure, people do get attached to their anonymous personas (academic studies have proven that much), and some anonymous nicknames have considerable respect on Wikipedia, but in the end, an anonymous editor with a bad reputation can always "restart", even after a block. A non-anonymous editor cannot (at least, not as another real, non-anonymous account).
Non-anonymous editors are more vulnerable to harassment and civility violations (and let's not even talk about cases of real life harassment, which have happened). Non-anonymous users are more likely to leave this project, as they don't want their real life reputations ruined.
Wikipedia officially wants to attract academics, and become more reputable thanks to their participation. Three examples from several academics I know illustrate their dissatisfaction with how they are treated on the site. Two of them revealed their true names; one is glad he didn't. Two of them left the project and one is considering leaving. Why?
Editor A, as his first edits, added some external links (some were quite relevant, some were indeed too detailed). He was accused by an anonymous admin of spamming; he got offended and left, saying that he had better things to do with his time than to contribute to a project and get such strong words in return (I know he was planning a major rewrite of several key science articles - he never did it).
Editor B, considered by many editors a good and civil content creator, was accused of "academic dishonesty" on the talk page of an article by an anonymous user known for rash and uncivil remarks, and left soon afterwards, saying that he cannot participate in a site and risk such slander becoming associated with his real name.
Editor C, contributor of hundreds of high-quality articles, was accused of "copyvio" in the middle of drafting a new article (the original source was already referenced and the paragraph in question had been partially rewritten from the start). He was highly offended by the accusation of plagiarism, and stated that facing such slanderous accusations does not make him want to contribute more.
One could ask, of course - what good is non-anonymity? Shouldn't we just advise all editors to be anonymous and create an equal, anonymous playing field? The answer is no, since non-anonymous users are inherently better for Wikipedia than anonymous ones. First, they have the moral courage to associate their real-life persona with their views, displaying a rare strength of character (or sometimes plain naivety, particularly in the case of older people who haven't even heard of flaming). Second, they bring the authority of their real-life persona to the articles and the project: it's beneficial to articles and editors to know they interact with experts, and it improves the image of Wikipedia (Britannica's last argument is that it's edited by experts, while Wikipedia is the work of amateurs). Needless to say, an anonymous expert is hardly reliable. Third, they are less likely to risk being uncivil/dishonest (since their real-life reputation is at stake).
Yet being non-anonymous is neither promoted nor protected in the Wikipedia community. This is simply illogical. I am not arguing for the banning of anonymous accounts; they should be allowed. However, being non-anonymous should be promoted, and non-anonymous users should be rewarded for their special dedication to this project.
Solution: there should be an officially recognized level of usership for non-anonymous users. There should be a way to certify you are who you say you are (for example, by making a $5 donation to the project with a credit card with your name on it, or by demonstrating (via a website, blog, etc.) that you are who you claim to be; Template:User committed identity may also offer some solutions). Non-anonymous editors should be very strictly protected from slander and flaming (akin to WP:BLP), and there should be a protection level for articles that would allow only non-anonymous editors to edit them (thus shutting out anonymous "true believers").
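For illustration, here is a minimal sketch (in Python, with purely hypothetical names and details) of how a hash-based committed identity of the kind mentioned above can work: only the digest is published, and the secret string can later be revealed privately to prove that the account and the real person are the same.

    import hashlib

    # Hypothetical example: the secret string combines real-world contact details
    # with some private text, so the published hash alone reveals nothing.
    secret = "Jane Q. Example <jane@example.org>, committed identity set in 2011"
    commitment = hashlib.sha512(secret.encode("utf-8")).hexdigest()
    print(commitment)  # only this digest would be posted publicly, e.g. on a user page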
2010 update. I use my real name on en Wikipedia. This allowed some disgruntled, anonymous opponents of mine to create hate page(s) on the Internet. I've seen and/or heard of this (off wiki, anonymous slander, even with death threats and real life stalking...) happen to others who use their real name here.
Is the lesson that we should have been anonymous from the start? I still don't think so. That would mean giving in to the power of the anonymous trolls. I still believe that the solution is to embrace non-anonymity. Anonymous editors should not be welcome here, unless they can prove (in a private discussion with an officially recognized WMF body) that they have a need to be anonymous (living in a totalitarian country, etc.). As long as we are anonymous, Wikipedia is seen as less respectable than it could be, and its culture is much more open to trolling and other forms of abuse.
2023 update. In the last few years I've received even more harassment, very serious harassment in fact. However, all that I can add to the above is that people should carefully consider whether they have the stomach to handle it. If you worry that your health, safety and/or career could be impacted by ill-intentioned actors who want to chase you off Wikipedia and/or get revenge for some perceived slights, you are better off being anonymous. For the few who put their real-life reputation on the line, however, we need some way to reward this and mitigate the risks involved. WMF does a lot to protect the anonymity of those who chose to be anonymous, but not enough, I feel, to protect those who try to effectively protect others and reinforce the reputation of this project by pre-emptively discarding the cloak of anonymity.
Why good users leave the project, or why civility is the key policy
I've seen too many good editors - including real-life academics, as outlined above - driven away from Wikipedia by anonymous "true believers" and by the worsening atmosphere due to radicalization. In most cases, the same process occurs: good editors get involved in pointless, stressful discussions with "true believers" and become targets of their uncivil personal attacks (baseless accusations of "academic dishonesty", "nationalism", "antisemitism", you name it). They may also get baited into some edit warring (since "true believers" like to edit war, and often the only interaction in an article between normal editors and "true believers" is reverting one another). That leads to wikistress ("why am I contributing to this project, if all I get as a thank-you is flame and trolling?"). Good editors will then leave, not willing to spend time creating quality content in exchange for flames and, in the worst cases, slander against their real-life persona, and the satisfied uncivil flamers will move on to new targets.
Update: We now have some hard numbers on contributors leaving. 38% of editors who left point to the unpleasant atmosphere and "some other editors" as the reasons they left; that number goes up to 61% for editors with 10+ edits a month.
Editors who build an encyclopedia should be encouraged. Editors who chase away encyclopedia builders should be discouraged. All should respect the content creators, who are the reason this project is useful to the world. Gnomes, mediators, MediaWiki designers and so on should of course not be forgotten about, as well.
Caveat: Editors are not equal. Of course, even the greatest content contributors should not be given carte blanche with regard to personal attacks and such: they may drive away more people who would have created more content than they themselves do. However, experienced users and prolific contributors should be given reasonable benefit of the doubt when they say they know more than new and less active users. Knowing when and how much doubt to give is what makes a good administrator; unfortunately, too often an administrator will elect to safely "follow the law" and damn a quality content creator to wikihell rather than risk investigation and involvement in a controversial dispute.
Editors who are more likely to create content than to damage the encyclopedia (with edit warring/uncivil remarks/etc.) should not be blocked (if more surgical remedies, usually aiming at deradicalization, can be applied). They are the true motor of our project. Editors who are being disruptive by playing wikipolitics, harassing content creators and so on, should be blocked. They are parasites that threaten to strangle us.
A novice was once curious about the nature of the Edit Count. He approached the Zen master and asked, "Zen master, what is the nature of the Edit Count?"
"The Edit Count is as a road," replied the Zen master. "You must travel the road to reach your destination, and some may travel longer roads than others. But do not judge the person at your door by the length of the road he has travelled to reach you."
And the novice was Enlightened.
I love this quote (source), but there is more to the issue than the quote covers.
The type, quality and amount of an editor's activity are of paramount importance. To a varying extent, this can be quantified (it's called social science, and social science uses statistics; see Wikipedia:Wikipedia in academic studies, a page I created and maintain, for hundreds of relevant studies, which even quantify such ideals as "trust" on Wikipedia (ex. Dondio and Barret, 2006)). There are tools (WP:COUNT) that allow an analysis of editing patterns. Though at first glance they may be seen as a drug feeding Wikipedia:Editcountitis, in fact they can provide much insight into editing patterns (some can break down edits by type, time and variously defined quality).
First, type of activity. Editors who create content and/or do wikignomish tasks (including building our policies and such) should be valued over editors who treat Wikipedia as a discussion forum, or, even worse, as a place to flame and create battlegrounds like on Usenet. The edit-counter tools allow one, to a varying degree, to discern percentages of edits per namespace (and chronological trends); obviously a user 90% of whose edits are in mainspace is different from - and likely more valuable (as a likely content creator) than - a user 90% of whose edits are in article talk/user talk (and who is thus likely a flame warrior).
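To make the namespace-percentage idea concrete, here is a rough sketch (in Python, with hypothetical numbers; this is not the output of any real edit-counter tool) of the kind of breakdown such tools provide:

    # Hypothetical per-namespace edit counts for one editor.
    edits_by_namespace = {
        "Article": 9000,
        "Article talk": 400,
        "User talk": 300,
        "Wikipedia": 200,
        "Wikipedia talk": 100,
    }
    total = sum(edits_by_namespace.values())
    for ns, count in sorted(edits_by_namespace.items(), key=lambda kv: -kv[1]):
        print(f"{ns:15s} {count:6d} ({100 * count / total:.1f}%)")

    # Share of content work vs. discussion: a rough proxy for "content creator vs. flame warrior".
    mainspace_share = edits_by_namespace["Article"] / total
    talk_share = sum(v for k, v in edits_by_namespace.items() if "talk" in k.lower()) / total
    print(f"Mainspace: {mainspace_share:.0%}, talk pages: {talk_share:.0%}")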
Second, quality of activity. We have various peer-based ways of recognizing the quality of content (being able to write Featured or Good articles, receiving barnstars from a wide group of neutral editors (not tag-team buddies!), and so on). There are also some more esoteric ways to analyze quality (see Wikipedia:Wikipedia in academic studies for more). Bottom line: the better the quality of a user's edits, the more valuable he is.
Third, amount of activity. Again, we should be careful of editcountitis, but the more active the user, on average, the more valuable he is to the project.
There is a very important difference between judging an editor based on the total number of things he has done, and judging him by comparing him to an average editor. Consider the following examples:
who is a better content creator? An editor A who has created 1 Featured article per month for the past 6 months, or an editor B who has created 10 Featured articles over 4 years? What about editor C who creates one DYK weekly and has been doing so for the past year, compared to editor D who has been creating 3 DYKs weekly for the past half a year? Sure, the answer here is "both are great content creators". But what about
who is a revert warrior? An editor A with 1 3RR violation - who joined Wikipedia this week? An editor B with 6 3RR violations, editing for 6 months? Or an editor C with 2 3RR violations, editing Wikipedia for 4 years? Consider my first essay above, about the inefficiencies of ANI/3RR: if an admin just looks at the recent history of an article and sees two editors (B and C) edit warring, is he right to call them both edit warriors? I think not.
who is uncivil? An editor A with 2,000 edits per month and one confirmed uncivil comment per month, editing Wikipedia for 5 years and thus with over 50 uncivil comments on his record? An editor B who joined this project a month ago and has made 1,000 edits, 5 of them uncivil? An editor C who has been editing for a year, with about 20 edits per month, but half of them uncivil (over a hundred total)? Without knowing the average level of incivility on this project, can we say that editor A is uncivil? Is one uncivil comment per month, among 2,000 other edits, enough to say that? What if it is way below the level of incivility of an average editor?
who has lost the trust of the community? Let's say that an average editor makes 10 edits per day and is criticized once every 10 days (thus once for every 100 edits). An editor who is 10 times as active (makes 100 edits per day) and is criticized half as often per edit (thus once for every 200 edits) will still rack up one criticism every 2 days. If one states during a dispute resolution "editor A is disruptive and has lost the trust of the community; he is criticized 5 times as often as an average editor, because he is criticized every 2 days instead of every 10", one will be doing an injustice to editor A, who is actually half as disruptive per edit as an average editor - he is simply 10 times as active... The only "fault" editor A has is that he is 10 times as active as an average editor - should he be ordered to limit his activity? Or should we say that "if you are ten times as active, you should be ten times as civil as an average editor"? Ridiculous, isn't it? Yet I have seen active editors, civil above the average, criticized in that very way.
ignoring time patterns can be perilous. Time patterns show radicalization: if an editor has been editing for 5 years, created much content in his first 3 years, but for the past years has mostly edited on talk pages, it likely shows he has radicalized, and needs to be "reformed back".
Editcountitis teaches us a very important lesson: total numbers are much less meaningful than averages, particularly when combined with time patterns. Assuming one's habits don't change over time (which is not always true - editors can radicalize, or de-radicalize - but let's leave this aside for now), total numbers can penalize active and long-term editors: consider a dispute resolution in which an editor with 100 days of editing history, 10 edits per day, and 1 uncivil edit per day (thus 1,000 total edits, of them 100 uncivil comments) claims that he is no more uncivil than an editor with 1,000 days of editing history, 100 edits per day, and 1 uncivil edit per 10 days (thus 100 uncivil comments as well - but spread among 100,000 total edits!). Both editors can present the same number of diffs, but it is clear one of them is much more of an uncivil flame warrior than the other. That said, this is where time-pattern analysis should be used: if our active editor has radicalized recently - if 75 of his uncivil comments occurred in the past 100 days of his activity - the situation is different than if his incivility pattern had been stable.
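A small sketch (with hypothetical numbers mirroring the example above) of how per-edit rates and recent behavior, rather than raw totals, separate the two editors:

    # Hypothetical figures: (total edits, uncivil comments overall, uncivil comments in the last 100 days).
    editors = {
        "A (100 days, 10 edits/day)":    (1_000,   100, 100),
        "B (1,000 days, 100 edits/day)": (100_000, 100,  10),
    }
    for name, (total_edits, uncivil_total, uncivil_recent) in editors.items():
        rate = uncivil_total / total_edits
        print(f"{name}: {uncivil_total} uncivil comments total, "
              f"{rate:.2%} of all edits, {uncivil_recent} in the last 100 days")
    # Same totals (100 each), but A is uncivil in 10% of his edits and B in 0.1% -
    # and a spike of recent uncivil comments would flag recent radicalization.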
Another factor to consider is that "mud sticks". On Wikipedia, things are publicly archived and can easily be brought back with links and/or diffs. The more active one is, and the longer one has edited, the more problematic edits one will accumulate. Unlike in real life, where history fades and is forgotten and/or becomes more and more difficult to trace, on Wikipedia past edits can be discovered and brought back much more easily. Further, the more active one is, the more feathers one is likely to ruffle (as I've noted in the discussion of adminship). If an editor (or a tag team) has been criticizing editor B for 2 years, editor B's reputation will be much more damaged than if the criticism had gone on for only a month. Being able to put things in perspective via time and activity patterns is crucial; otherwise, if one simply considers the total numbers, the simple conclusion is: the more active an editor, the worse he is (if we look at disruptive edits; of course, analogously, if we look at good edits, the reverse is true).
If we can calculate, or at least estimate, per-edit and per-time averages for editors, this tells us much more than total numbers. Adjusting this for time patterns can help to detect radicalization; such editors should be cautioned and mentored (this should, however, be a good-faithed, friendly approach - not punishing and alienating them, at least not until they show no sign of improvement).
With the above tools, editors can be ranked. Ranking, of course, is not perfect and is not unilinear (editors should have several ranks), but to a certain extent this allows an important, objective comparison of editors. Such a comparison is vastly preferable to anecdotal comparisons based on cherry-picked examples (often made with the goal of slandering an editor).
Solution:
1) Development of tools for the analysis of user editing patterns should be encouraged. Editors can be ranked.
2) Flame and edit warriors, true believers, and so on can be detected and should be warned, mentored and, if unreformable, banished from the community.
3) Radicalization can be detected and should be counteracted as above.
4) There are small lies, big lies and statistics. When making decisions, statistics are important, but care should be taken to ensure they are not biased. People arguing their case will usually try to cite statistics that support it, but which may be highly misrepresentative (ex. if an average editor makes 1 uncivil comment per 1,000 edits, an editor who makes one uncivil comment per 10,000 but has been editing for years and has been extremely active can appear somewhat uncivil - this is, however, an unfair judgment, in fact penalizing an editor for above-average activity).
Evil cabals - groups of editors who want to disrupt the project - do exist, but are rare. However, cabals themselves are common. The Wikipedia project is built around collaborative software and cooperation with other editors (see also the ethic of reciprocity). Over time, groups will form, be they around wiki-organizations (WikiProjects), status groups (admins...) or less clearly defined groups of editors. This, however, should not be taken as a sign of anything wrong. If editors are disagreeing with you, consider that the most logical explanation is that you are wrong and/or in violation of the site policies (and if you don't even want to consider this, you have a problem), not that there are evil cabalists bent on getting you...
99.9% of the time, when editors cooperate, they do so to improve this project. Sometimes they will get accused of doing it to damage the project. This often occurs due to radicalization, as editors on some subjects divide themselves into camps and increasingly assume bad faith about the other side. This may lead to a self-fulfilling prophecy and the formation of defensive cabal(s), since when a group of editors is accused of a conspiracy often enough, they may form one simply to defend themselves more efficiently. This is yet another factor contributing to the fact that cabals are common.
Solution: if you see a group of editors, assume good faith. They are much more likely to be working together to help the project than to undermine it. Don't force the others to band against you. You can be your own worst enemy...
Model of mass radicalization and conflict generation
This model tries to explain why certain content areas (for example, Central/Eastern European history) are much more likely to generate battlegrounds than, let's say, geometry.
1. In every content area, a small percentage of editors display the signs of being "true believers" (uncompromising POV pushers). True believers misunderstand or ignore WP:NPOV; they act as though their POV were NPOV, refuse to recognize they have a POV (they often claim they represent NPOV), refuse to compromise on content with editors of other POVs, and treat all who disagree with them as enemies. They frequently edit war and refuse to back down on talk.
2. The Wikipedia model in general, and content-related dispute resolution procedures in particular, do work; thus "true believers" in most cases find themselves in the "losing position" - the neutral/mainstream community ensures their POV is given only due weight.
3. This may, however, take much longer in highly specialized content areas, where fewer neutral editors will notice disputes. There, the relatively few editors know each other much better, and radicalization (the process by which a normal editor turns closer and closer to a "true believer" - at the very least, assuming good faith for "their side" and bad faith "for the others") is more common, leading to the rise of tag teams or at the very least the formation of content-based sides/camps/alliances/etc. Thus battlegrounds are more likely to arise in such content areas (but the model is probably true for all content areas).
4. Because "true believers" are likely to lose content disputes (as their edits are stamped out to due weight by the neutral/mainstream community) they turn to harassment, personal attacks, and similar acts. Whether they do it on purpose of due to frustration of losing one content battle after another, the end result is a high count bad faith accusations and battlegrounds in articles where they clash with others.
Technical note: If both parties ("true believers" and "victims") claim they are right, how does one easily identify who is who? There are two ways: 1) Look at who is supported by neutral editors (moderators, etc.). Two caveats: users involved in the content area may be biased due to radicalization, and users "just passing by" may be confused by the "sticking mud". 2) Look at the content creation: users who can write peer-reviewed and recognized content (FA/Reviewed A/GA) probably know more about NPOV than those who don't.
Ideally, admins should be paragons of virtue, examples to follow, liked and trusted by all of the community.
However, we don't live in an ideal world. Admins are normal people and occasionally make mistakes. Further, like anybody in the spotlight (famous people in the real world... or Jimbo here), they cannot please everybody. The more active an admin, the more likely it is he will become somewhat controversial and will have enemies. Heck, if an admin bans trolls, the trolls will dislike him :) Admins often police Wikipedia, enforcing policies and dealing with troublesome users: consider how police officers in real life can be unpopular. If an admin edits content, those with the opposite POV, particularly "true believers", will dislike him. In some areas, plagued by tag teams and such, admins can become targets of harassment - when they try to enforce the policies, the editors who disrespect them will try to paint them in the worst light possible. Tough life, but adminship should not be seen as a popularity contest.
The above not only makes it likely that an average admin will not be liked by everybody, but it also influences who becomes an admin. In theory, as long as an admin respects NPOV and other content policies, his personal POV (e.g. is he pro-life or pro-choice, is he pro-Russian or pro-American, and so on) should be completely irrelevant to whether he should or shouldn't become an admin. Yet I have seen many RfAs where a good editor, who had however shown some controversial POV (not breaking any rules, just clearly identifying with one or more POVs), was attacked by his POV opponents ("true believers"), who flocked to his RfA with oppose votes (WP:TAGTEAM?). That made many neutral editors express doubt along the lines of "if enough people oppose him, there must be something to it", and choose not to support him. That, in effect, torpedoes such nominations.
On the opposite side, it is also common (if much less problematic) for an admin candidate to get a high percentage of support from editors who share his POV. I say it's less problematic because I have not seen ineligible candidates elected simply because editors with a sympathetic content POV overwhelmed neutral editors, but supporting somebody just "because they have a similar POV" is not a good argument for adminship. This is actually most apparent in cases where an admin candidature has been targeted by one group (those who don't like the candidate's POV), and then the other group(s) come to the rescue. Usually this indicates a high degree of radicalization of the editors involved.
Perhaps the problem can best be illustrated by the words of three editors I know:
the first editor is a Polish Wikipedian whose editing pattern on en wiki differs from his editing pattern on pl wiki; also, from discussions with him on talk, I realized that he was not editing certain articles he was interested in. When I asked him why, he told me: "If I edit those articles or discuss them, I will become controversial. A controversial editor cannot win RfA. I have to be an uncontroversial nobody with some positive edits in noncontroversial articles to win RfA. Once this happens, I'll be able to edit what I want and show my real POV."
the second editor commented along similar lines, saying that if one wants to become an admin, he should avoid any controversial articles and avoid expressing a POV. He also suggested that previously controversial editors, if they want to become admins, have no choice but to vanish, edit uncontroversially under a new account for a few months, and then apply at RfA, which would otherwise be stacked by their opponents
a third editor told me: "you would never pass an RfA today, you passed it when you were a nobody - today you are somebody and somebodies don't pass RfAs"
The system - of electing new admins, and of criticizing existing ones due to their POV - is obviously broken: it can be gamed, and it is unfair to editors who express an unpopular POV before becoming admins (and who thus have a much smaller chance of becoming admins than if they had instead hidden their true POV and "cheated" the community).
Solution:
1) If an admin abuses admin powers, he should be desysopped.
2) If an admin gravely abuses community trust, he should be desysopped. All admins should be open to recall.
3) Admins, just like everybody else, are entitled to their POV(s) and cannot be expected to abandon them and become NPOVed angels. Being an admin should not be equated with having no POV. An admin's content edits should be ignored when considering his conduct as an admin. You should not care if a police officer is pro-life or pro-choice; you should care if he is using his powers correctly (is he shooting at random people?) or if he is otherwise unfit (breaks common norms of civility, decency, etc.).
4) Admin candidates should not be penalized for being unpopular or for having a particular POV. It should not be claimed that existing admins have lost the trust of the community if those claims are centered not around any misuse of admin powers, but around their content POV and content edits.
5) If an admin has some POV enemies ("true believers", "tag teams"), they should not be allowed to create an illusion that the community in general has lost trust in him.
On why so many admin heads are seen sticking in the sand when push comes to shove
Increasingly I see admins not willing to "take a stance" against disruptive users. Sure, the classic trolls are dealt with as efficiently now as they were in the past. But more insidious disruption is often ignored, and the admin noticeboards are often criticized as a "justice lottery/roulette".
First: dealing with disruptive editors is not as clear-cut as dealing with a kid troll replacing a page with an obscenity. It takes more time to spot the disruption, and spotting smart and dedicated edit warriors, "true believers" and CIV-skirting flamers is difficult. Disruptive editors may be experts in harassing good community members and chasing them out of the project - but expert CIV violators and flamers are difficult to deal with (which is why we have WP:ANI/3RR, but attempts to create an equivalent board for dealing with CIV violations (ex. WP:PAIN) have failed). Hence, getting to the bottom of a case involving such problematic editors is not easy; thus many admins will either avoid it ("I don't have time for that... let somebody else deal with that mess") or can be too naive and thus misled.
Second: Disruptive editors can be aided by tag teams and radicalized users, creating an appearance that they have at least some community support behind them. They or their allies can harass the admin who takes a stance and warns/restricts/blocks them (or their allies), and they can damage the admin's reputation. Thus admins will often get burned in one or two cases, and then they will be too intimidated to take action against certain editors or anybody resembling them.
Third: Failures in the election of new admins lower the quality of the average admin, as the selection process prefers non-controversial editors who are already good at not taking sides and generally sticking their heads in the sand. The same editors may be more concerned with the letter of our policies than the spirit (since upholding WP:IAR or WP:CIV is much more difficult and controversial than wikilawyering with more clear-cut policies like 3RR). And of course it matters who you are and whether you know how the game is played.
Thus the reason for the AN(I)/AE "justice lottery": 1) it matters who you are and who your opponent is (your wikipolitics experience - can both parties "frame" their case to sound good for them?; your reputations - when people see who the involved editors are, are they likely to think somebody is a good/bad guy right off the bat? are they going to be scared to oppose one or both sides?; allies/tag teams - what is their reputation? can they create the illusion of consensus behind a side?) and 2) it matters whether the admins who pick up the thread are interested in the letter of the policies or the spirit of the project.
With all of that said, I fully appreciate the danger of too-strong controls: I have seen boards where admins (mods), believing themselves to be gods and wielding power with little responsibility, run virtual versions of dictatorships. However, the current system, where uncivil editors have the upper hand, is clearly not working, and I deeply fear that with admins preferring to stand by while the project goes up in 'net flames, Wikipedia may be heading for the same flaming hell where Usenet is now.
Solution: Enforce WP:CIV and deal harshly with flamers who create battlegrounds.
A new observation occurred to me: editors who are not involved in "wikipolitics" - AN(I) discussions and other discussions about key (or currently hot) policy topics - are at a serious disadvantage when it comes to many dispute resolution procedures: they have few friends in those circles, and are more likely to be seen as wrongdoers than better-known editors. Yet the problem is that the editors not involved in "wikipolitics" may be the most prolific content creators - editors who are here to build the encyclopedia. When they clash with wikipoliticians - those who are building the rules and regulations, and/or know their inner workings - they are seen as outsiders, and a wikipolitician bystander is likely to side with his friends - or at least with editors he recognizes - rather than with a wikipolitical newcomer. Further, those not knowing much about wikipolitics will not know what to report or complain about, and will often be chased off the project, unable to use the dispute resolution procedures to defend themselves.
Established wikipoliticians, commonly members of one or more cabals, can reliably and consistently count on mutual defense. Newcomers and loners will not have such a defense and thus will often have to fight their battles alone (with the predictable result of being seen as "against the consensus").
In a game-theory sense, solving wikiconflicts through reason and good faith is an unstable strategy on Wikipedia, because every single participant can play dirty (wikilawyer) instead, and thus derive an advantage over those who don't know much about wikipolitics. In collegial wikis - usually, small wikis - other social factors can make such horseplay easier to spot and thus unfeasible, so solving conflicts through collegial discussion can remain a stable strategy. But once you are past Dunbar's number, all bets are off. Further, the winning side will be socially rewarded (recognized as "they were right", etc.), and the losers will have their reputation tarnished. Thus, we have two sides of a feedback loop: the exercise of wikipolitical power improves the chances of winning conflicts, and winning conflicts increases wikipolitical power. In such a system, the feedback loop creates a sort of evolutionary pressure that favors the accumulation of power by experienced wikipoliticians and disfavors loners who prefer encyclopedia-building to wikilawyering.
So, yes, if you edit *anything* arguable on Wikipedia, you'll have to join a cabal or the other cabals will get you.
Conclusion: Any active content creator will become involved in wikipolitics (or leave the project refusing to do so), unless he edits totally uncontroversial articles or doesn't care what others do to his work.
Building on the argument about the importance of wikipolitics, as well as on comments and experiences from some dispute resolution proceedings I witnessed, a thought occurred to me: there are many editors who have neither the skill, the time, nor the will to participate in even the most crucial wikipolitics - the dispute resolutions involving them. Just as in real life, we need wiki-advocates to represent such editors. The real danger is that good content creators, who join this project to (gasp) create content, may not want to spend time in the more vicious dispute resolutions - not the normal ones involving content discussions, but the virulent ones, where they have to defend themselves against slander, harassment and such. Some editors, who join this project to build an encyclopedia, may, upon encountering this deeper and much nastier level of the project, be so annoyed that they will stop participating - particularly when they realize how time-consuming such proceedings are, how intricate a knowledge of wiki policy they require, and, not least, how stressful the lack of good faith makes them. We used to have an Association of Members' Advocates, but it disappeared - a shame, since it is becoming more and more needed as Wikipedia becomes increasingly bureaucratized and wikipoliticized.
If the fact that we need wikilawyers (in the lawyer/advocate sense, not the wikilawyering/rule-stretching/time-wasting sense) isn't a sign of the ills of the community, I don't know what is.
Solution: In a wiki world where wikipolitics is a sad reality, much as we should avoid it, editors either need to become well versed in it, or to rely on those who will do so for them...
On why the little editor doesn't matter (but should)
It has been my sad observation in the past year or so that the volunteer - the editor - the individual - is no longer important. Bureaucracy, red tape and oligarchies are becoming arrogant and know-it-all. Newbies often don't get the benefit of WP:AGF, WP:BITE is forgotten, indef blocks instead of warnings and escalating blocks are increasingly common for even small offenses, and the ArbCom wields the stick, pointing out editors' errors with much more enthusiasm than it is willing to admit editors' innocence, dedication to the project - or its own failings.
Solution: avoid wikipolitics and remember why we are here. We are not here to build our own power structures or wiki-organizations, or to create a brave new world. We are here to allow editors to write an encyclopedia. The admins, and even the arbitrators, are servants of the community, not elected lords and masters.
On Wikipedia, things are publicly archived. The more active one is, the longer one has edited, the more problematic edits will one accumulate. Unlike in real life, where history fades, on Wikipedia, past edits can be discovered and brought back much more easily.
Further, the more active one is, the more feathers one is likely to ruffle (as I've noted in the discussion of adminship), and the more chance one has of becoming a target of some hateful editor (or misguided do-gooder). So the longer somebody has been with the project, and the more he has contributed, the easier he is to attack by dragging up his past mistakes.
Worse, one does not need to have made real mistakes to be a victim here. Often, what is framed as past mistakes may never have been declared as such by any consensus: it's enough that one editor has called an action a mistake - a diff can always be dragged out of context. Out-of-context evidence citing can take many forms: an editor who has been the target of an arbcom case, even if he was innocent, will have the stigma of "being a subject of an arbcom", and his reputation is more vulnerable - his opponents will mention his arbcom all around, without saying that this arbcom didn't find him guilty...
The end result, sometimes the end goal of sophisticated harassers, is to destroy the reputation of editor A, by creating an image of him among the community that he is "often criticized", leading to a mistaken impression that "if he is so often criticized, he must have done something wrong".
While some incidents may be accidental, editors may find themselves targeted by an editor or a tag team with a grudge and a well-thought-out strategy: criticizing an editor for months or years, ruining his reputation and raising his wikistress levels, so that he will be looked upon with distrust by members of the community, so that he runs away from the bullies, letting them win edit disputes simply through the fear that they will harass him again - or so that he simply leaves Wikipedia in disgust.
Being able to put things in perspective via time and activity patterns, as well as context, is crucial; otherwise, if one simply considers the total numbers, the simple conclusion is: the more active an editor is, and the more he stands up to harassing users (incurring their wrath), the worse his reputation will be.
Solution: if faced with an unfamiliar dispute, don't "believe" the statements. Investigate the issue in detail, to make sure you are not missing crucial facts, and be aware that some editors may try to misrepresent certain facts on purpose, to make the other side look much worse than they really are. Oh, and remember why we are here and what this entails.
When to use the banhammer - and when not to: some simple math
When considering whether to apply a ban or a block, a crucial question should be (but often isn't): will this hurt or help the encyclopedia? From this follow important questions that should be asked by an admin before he swings the banhammer:
if I ban the editor, will the encyclopedia lose valuable content? If so, perhaps other forms of restriction should be considered (warnings,* topic bans, 1RR restrictions, mentorship, civility paroles, etc.). Remember: a user restricted & reformed (deradicalized) is a user that helps the project. A user banned is at best not damaging it further.
but also
if I don't ban the editor, will the encyclopedia lose valuable content?
Solution: do the simple math and consider whether the encyclopedia will be better off with the ban or without it. Do consider the ripple effects of the ban, not only the immediate consequences.
* I find it very strange that many admins, instead of warning people, prefer to block them. Sure, the brute-force approach is simpler, but it is hardly helpful to the project.
Having said that, I do not mean we should accept copyvio images. In fact, I appreciate Wikipedia's attention to enforcing copyright; the ensuing inconvenience educates people about the impracticality of the current regime. However, if the copyright is unclear (usually because an obscure law is confusing, or the owner is very difficult to trace), I support erring on the side of "copyright expired". What annoys me the most is that we delete images where there is virtually no chance of us getting sued (alas, most of the community feels otherwise). For similar reasons, I don't see why we should be so unfriendly towards fair use.
I often hear the argument that the WMF cannot afford to get sued. I do not buy it, not without seeing an argument backed up by numbers saying that an average suit costs X, and that we could not raise X from the enraged worldwide community. If we ever get sued over copyright, I expect it will be a very beneficial suit - both for Wikipedia (WMF) and for the free culture movement. Such a suit is likely to be covered not only from the WMF budget but also from other sources (EFF, CC, and similar organizations), should increase public understanding of free culture, and should generate good publicity for Wikipedia/WMF/free culture (and bad publicity for whoever is suing us). We should not push for such a suit, but if it happens, I think we should embrace it as a chance to change the world.
Instead, what happens is that Wikipedia is the Internet's expert in enforcing copyright policies nobody else understands or cares about (although occasionally this works out; for example, the concept of freedom of panorama was brought to scholarly attention due to this topic being so widely discussed on Wikipedia; de Rosnay, Mélanie Dulong, and Pierre-Carl Langlais (2017)). We are also great at protecting the rights of Nazi photographers (yes, yes, Godwin's law...), whose photos may be found in the German National Archives, but "we know better" (ex. case study 1). Or protecting the rights of anonymous authors whom nobody can identify (case study 1).
I find it sad that our policies support being very defensive and scared of copyright laws. Those laws are wrong, and we should be at the forefront of challenging them - just as we already did by adopting free licenses and showing how great a project we can make. Time for a little more civil disobedience, I say...
Let's face it: COI isn't a good policy. Everybody and their dog has a COI of some sort, and the arbitrary, blurry line we have is hardly a solution. Just read NPOV, which clearly states that nobody is neutral and everybody has a POV, which is acceptable. Every POV creates a COI. This is a contradiction in our policies, and I don't know about you, but I'll pick NPOV and encyclopedia-building over COI and its witch hunts any day.
Let me say this simply: paid editing should be allowed, if disclosed. Why should we care what an editor's motivation for creating articles is? People do it for various reasons, not all of them pure and altruistic. What about editors who create articles not for money but to please someone or as a favor? What about students creating articles for an assignment, and getting paid in (hopefully) good grades? What about editors creating articles to get wiki awards? What if I win a $ prize for a Wikipedia article? What if I promise $ to those who contribute to certain articles? What if I include the fact that I've written 1,000 Wikipedia articles in my CV? What if contributing to Wikipedia becomes accepted in the academic community and contributes to one's academic career? We shouldn't care if somebody is doing this for $, as long as the content is good (NPOV, notable, etc.) and the arrangement is disclosed (for COI-NPOV analysis). In the end, there should be only one question to ask: are paid articles helping Wikipedia or not?
Discrimination against paid editing only leads to incidents like this, where good editors have to break other policies (socks), lie, withhold the truth and feel ashamed.
Counter to arguments that paid editing will drive volunteers off and make the project dominated by for-profit editors: 99% of people today edit Wikipedia for fun, and this will not change. People don't stop playing games or sports just because some people are professional gamers or athletes making $$$. The NBA doesn't mean that fewer people are playing basketball. "Why would User:X slave away half the night editing Arab-Norman culture for nothing?" Because he enjoys doing so, and can apparently make a good living without editing Wikipedia for $. Not everyone is driven by $, so not everyone will switch to paid articles. And what about the possibility of non-profit organizations offering $ to have people write articles on, let's say, Arab-Norman culture? Counter to point 3: paid editing is just one of many POVs. Personally, I worry more about religious fanatics and secret government agents than PR firms. Money just has a bad rep in some leftist fora :)
And has anybody considered that in some Third World countries, this may actually be a way for some people to get out of poverty? To all of those who are against paid editing - I presume you don't need that money, good for you, but what right do you have to tell others that they cannot earn their money this way? If even one editor in a Third World country can get out of poverty (and possible starvation, sickness and death) by being paid to edit Wikipedia, he has my blessing.
On what I learned editing Eastern European content area for over five years
I have had a lot of time to think about recent and not-so-recent events, and I've distilled my thoughts into the following analysis. I tried to compress my 5+ years of EE experience and several DR proceedings into it; I hope you find it useful.
WP:EEML was, I think, the fourth major EE arbcom case that I recall, though I know there were some smaller ones as well. The big ones seem to recur regularly around fall. There is an English saying that seems quite applicable here: "Once is happenstance. Twice is coincidence. Three times is enemy action." (see also the "1 2 (3) Infinity" principle)
The enemy here is not "the other side" (and generalization into two sides is a major oversimplification anyway). The enemy here is "us" - all the editors who became involved in EE areas and became radicalized over time. Sure, some are worse than others, but nobody here is a paragon of shining virtue.
What needs to be done to end this vicious cycle of EE arbcoms every year? I, for one, have had enough of them. Yet by myself I cannot end them. Even if I leave the project, nothing will change.
However many sides are out there, they have all proven over the years that they can replenish their ranks. Both leaders and men-at-arms can be banned or leave - but others will step into their shoes, and the "battleground" will continue. Those who left only serve the new generations as martyrs - "remember X and Y, who were chased off by the others!".
Hence any arbcom remedy as crude as a block or ban is futile. A few months' block is just a delay; an indef means a martyr whose role will be taken up by another, or an infestation of socks. Worse, a desire to block an opponent one cannot deal with in terms of content creation leads some to actively engage in wiki-harassment, often combined with a wikilawyering mentality, often contributing to the cycle that ends up at arbcom (see a more detailed analysis of that problem here).
Amnesties, warnings, admonishments and their ilk are futile as well. Those warned will think they got off easily and can resume their actions, probably in a more sikrit and organized way. For every person who is scared off, another will step in, and the scared-off (reformed) editor will be seen as a semi-martyr/coward, and peer pressure will be put on him to rejoin his brethren.
Is there no hope? Not quite. I do think that a new type of solution needs to be implemented, one more complex than "block them all" or "do nothing". Major pieces of the solution puzzle have already been suggested - and this is not the first time they were. In fact, Irpen Alex suggested something akin to what I am suggesting last time... a shame his idea was never taken up (I should've supported it more strongly then... if I did... sigh).
What I suggest this time is a several-pronged approach to deradicalize editors:
Mediation and collaborative content creation (which should be its goal) should reestablish trust between editors and make them see that the other guy is also an editor who wants to help. Mediation should also be combined with mentorship. There should be enough mediators in MedCabal and MedCommittee to step up and deal with a bunch of editors if the Committee asks them (2-3 mediators should be able to handle such a large case), and finding a few mentors/coaches (many of whom could oversee multiple editors) should also be doable.
Assuming good faith, editors are radicalized because they see others around them radicalized and engaged in wiki-combat. Showing them that those they consider "enemies" can in fact be their partners in building the encyclopedia is a major milestone in changing that mindset.
Restrictions should penalize editors and limit the chance of further problems. Somebody cannot help but revert too much? 1RR. Incivil? Civility parole. Loses his cool when dealing with editor X? Interaction ban. Loses his cool on article Y? Article ban. And so on. Crucially, those restrictions should last until the mentor/coach/mediator thinks the editors they are overseeing have shown enough good faith and restraint to be allowed to regain their former freedoms. If abused, it should be easy (no red tape) to reinstate restrictions or block the editors. The editors should know that they are, and will be, watched by the community for a long time (there should, however, also be a watch for those who attempt to bait them, and such editors should be put under severe restrictions as well).
The restrictions should also include measures aimed at preventing forum/block shopping, in particular bans from reporting others to places like AE (without the support of uninvolved admins, perhaps) and, in extreme cases, interaction and dispute resolution bans. I discuss the logic behind this, and more refined implementation ideas, here.
Community service should create a way to penalize editors while teaching them about the project (telling them "this is not what you may want to do, but it is what the project needs") without turning them away from the project ("you are blocked - we don't want you here") and making them feel betrayed by "the system". Of course, this means that blocks and community service should not be combined (editors who refuse to do community service should be blocked, but why not give them a chance to do something constructive first?).
And those who refuse to acknowledge they need to join mediation/take up restrictions/mentorship/community service and so on may end up getting banned. Some people cannot be reformed.
Such restrictions should not target only one side; they should affect all editors involved in the EE issues. To determine who should be affected, lists of editors should be submitted by all parties (but the restrictions should not affect editors who were on the administrative fringes of the business, just those involved in content disputes).
This approach has multiple benefits, primarily:
deradicalized editors can ensure by themselves that the battleground will not reappear, and can later coach and mentor newcomers, turning them not into future warriors, but good editors. Thus the troublemakers are turned into gatekeepers
experienced editors will not burn out/be forced to leave, but will continue creating content (and this is the primary goal of Wikipedia, after all)
users who fail to improve under that scheme ("extremists") will still end up banned, but the moderates will be reformed (instead of allowed to radicalize into extremists or chased away)
obvious but: the EE battleground will finally end
To summarize: blocks/bans and amnesties have been tried and failed. If we don't want to see this dramu repeat itself in the future, with a slowly changing cast but the same set of issues, wasting everyone's time over and over, we need a new paradigm to break the cycle and deradicalize editors.
There are two aspects to positive reinforcement: 1) rewarding editors who do good work and 2) rewarding reformed, deradicalized editors.
With regards to 1)
This cannot be overstressed: Wikipedia:Kindness Campaign and similar projects need to be supported; they are a vital glue that holds our community together. Editors who contribute to Wikipedia do so because they find the project worth their time and easy to contribute to. But over time they encounter problems, and in order to counter them they need to de-stress and relax. To that end, showing other editors that we (the community) care about them, appreciate their hard work and want them to enjoy their stay is a crucial strategy.
"...positive reinforcement is superior to punishment in altering behavior. He maintained that punishment was not simply the opposite of positive reinforcement; positive reinforcement results in lasting behavioral modification, whereas punishment changes behavior only temporarily and presents many detrimental side effect" Skinner, B. F. (1970). Walden Two. Macmillan, Toronto
It is in everybody's best interest that the reformed editors stay that way, and positive reinforcement is a good way of achieving this (accompanied, of course, by the threat that relapse will lead to more punishment). The threat alone is not enough; positive reinforcement is needed if we don't want to lose those editors, as simply punishing volunteers likely results in them losing motivation to contribute to the project and leaving (remember: people contribute to Wikipedia not because they have to, but because they want to, and few people are gluttons for repeated punishment).
Note: I am aware of both the cries that the editors are leaving and that all is peachy. The truth, of course, is somewhere in the middle...
(English) Wikipedia's population of active editors has been stable, for the past several years, at around 50,000 (see Wikipedia:Wikipedians for details). You could say: stable is good, right? No.
First of all, we need more editors - just look at the number of inactive WikiProjects and the various backlogs, not to mention the systemic bias of the work being done; there are many areas that need more help, and the only way to get it is to have more editors join the project. Sure, you can say: "content is growing, eventually we will do it all, it's just a matter of time". Leaving aside the fact that the leveling-off of editor growth can be seen as a warning sign before the number of active editors starts to diminish, the sheer amount of work needing to be done (I estimate that our content coverage is barely 1% of the "sum total of [encyclopedic] human knowledge") means that (assuming human knowledge does not increase...) we still need something like 10,000 years to do a good job of covering it. Forgive me, I am slightly more impatient than that... :>
Further, the situation is more dire than the word "stable" does justice to. Slice it or dice it how you want, the bottom line is that since the population is stable while we are still gaining new recruits, somebody must be leaving. Think about it for a second. Who is leaving? Some newcomers, and some oldtimers. Ideally, we should not be losing anyone, and the loss of oldtimers is doubly worrisome. A significant number of those who leave are experienced editors who contributed a lot to this project - and there are not that many of those (Wikipedia's editors fit nicely into the Pareto principle: ~10% of editors create 90% of content). While this is somewhat anecdotal, I believe that each experienced user can name at least one respected, quite active colleague who is no longer here... I can name several :( The bottom line is: we are losing experienced editors and gaining n00bies. Now, I have nothing against the newcomers - I welcome them with open arms - but for the project it means that the number of active editors in fact hides a number of editors who are "learning the ropes" instead of building the encyclopedic content (efficiently).
We now have some hard numbers on contributors leaving. Why do editors leave? 38% of editors who left point to an unpleasant atmosphere and "some other editors" as the reasons they left; that number goes up to 61% for editors with 10+ edits a month. I discussed the theory behind that above: #Why good users leave the project, or why civility is the key policy (and linked essays).
If we don't improve the atmosphere around this project and make it more welcoming and conducive to editors sticking around, this project will, at best, never reach its goal ("the sum total of human knowledge"), and at worst, will eventually stagnate and start deteriorating.
I do wonder if it is a good title. What I mean is: there are two types of editors - those who respect WP:IAR and those who don't. For the first, improving Wikipedia matters more than other rules; for the others, it doesn't matter what the goal is - if a rule was not followed, the editor who broke it must be sanctioned.
I have a gut feeling that in the old days, the first few years of Wikipedia's activity, a higher percentage of editors respected IAR than today. It is just a gut feeling, though, based on some hard-to-express observations of how the community has been changing, and on what others have written on this subject.
A study of WP:IAR references over the years could be quite enlightening...
On why the most active and dedicated editors are seen as a danger to the community
There is an interesting study that shows that the most dedicated volunteers are a common target of attacks from within their own community. There are two major reasons for such attacks: 1) some see the very active and selfless volunteers as raising standards too high and/or making regular volunteers look bad; 2) others see them as social rule-breakers: deviants, weirdos whose high and selfless activity is "just not right", who threaten the community with their very existence (they are so different, there must be something wrong with them, they are not like the regulars, and thus must be chased away).
This time I took a rare digression into an issue dealing less with behavior of editors, and more with content quality. For that reason, I've decided to split this into its own essay here.
On the importance of under-appreciated WikiProjects
WikiProjects are an ancient Wikipedia institution, dating to the very beginnings of Wikipedia (the idea was first put forward in September 2001). It is my belief that WikiProjects are of vital importance to our community; at the same time, they have never been a focus of much attention from the WMF (with regard to research or general support), or from the community. This seems to be a major blind spot in our vision; we are constantly looking for ways to improve our content and our editor numbers and retention - yet we are ignoring one of the best tools at our disposal (yes, the WikiProjects).
Let me pose several hypotheses that I believe should be studied:
WikiProjects are conducive to developing a friendly and motivating atmosphere, thus improving editor retention;
WikiProjects are conducive to developing content in their area (there should be a correlation between the activity of a WikiProject and the quality/quantity of related Wikipedia content);
general participation in active WikiProjects is relatively low, with only a few significantly active editors keeping most WikiProjects alive. It is likely that most active WikiProjects are run by only several dedicated individuals;
most WikiProjects are inactive, as the few editors who kept them alive became inactive and were not replaced by a new generation. As of 2 April 2012, Category:Active WikiProjects had 687 entries; semi-active, 115; defunct, 53; and inactive, about 350 - but in my personal experience, most WikiProjects classified as active are semi-active or worse.
If the above hypotheses are true, several quick conclusions follow. First, much of Wikipedia's content is here because of WikiProjects. Second, WikiProjects exist thanks to only a small number of editors. Therefore, the project would benefit immensely if WikiProjects were even more active. Finally, even a small increase in WikiProject membership (one or two active editors per project) would have a major impact on the content.
The following research questions should be asked:
1) why are you / aren't you involved with a WikiProject?
2) what makes your WikiProject active / useful? (camaraderie? tools?)
3) what made you stop your involvement in a WikiProject? (I predict that the answer is most likely: there were not enough active editors to make it interesting)
4) how do the most active WikiProjects function? What are they, and what makes them successful?
The following are my suggestions to WMF/community at large. We should:
1) recognize the positive role WikiProjects play in our community (support "WikiLove" towards WikiProjects); the goal here is to increase editors' willingness to be involved with WikiProjects by making them feel proud/good about their involvement;
2) develop tools to make them more efficient. A major function of WikiProjects is likely the fostering of camaraderie (ties) between editors with common interests, but there are also some tools that make projects useful beyond mere discussion boards. For example, Wikipedia:WikiProject Poland has lists of most popular articles, a matrix of quality vs. importance, an article news feed, a new article feed, cleanup lists, notes on Poland-related article naming conventions, dedicated awards, and other tools. Wikipedia:WikiProject Sociology promises support for sociology instructors and professionals. There is surely more that can be done.
3) aid WikiProject recruitment:
editors active in topic areas should be encouraged to join a WikiProject, and post to the relevant (WikiProject) discussion page;
similar WikiProjects would benefit from merging, at least as far as having one centralized discussion place, so that they can reach a critical level of activity and look active to their members and newcomers; this is particularly relevant with regard to semi-active/inactive WikiProjects, which periodically attract an editor who looks around, sees no activity, and abandons them (a critical mass of active editors is likely the key);
there is room to boost WikiProject membership by recruiting outside Wikipedia (professional organizations), thus creating synergy with the outreach program;
the "welcome, and check out a related WikiProject of interest to you" new editor welcome template I developed at User:Piotrus/w should be popularized; it is also an example of how a useful tool can be easily developed (within minutes).
Of course, Wikipedia is also mostly a direct democracy, as most of the time we can make our own decisions. However, there are elements of representative democracy, as there are elected officials (administrators and arbitrators in particular) who are vested with the power to make certain decisions, and who do not have to be concerned with the opinions of people from outside their groups (see how much a non-admin opinion counts at WP:AE, or a non-arb's at an arbitration case).
There are small but noticeable examples of charismatic authority (the status of Jimbo) and benevolent dictatorship (the Wikimedia Foundation can overrule the community, and has done so, although this is exceedingly rare and reasonably limited to justified self-preservation cases such as "the community cannot choose to disobey American law, because this would result in Wikipedia's servers being seized and the project closed").
How is work organized?
Wikipedia claims it is not a bureaucracy... what a joke. Wikipedia has tens of thousands of policy pages, from a manual of style to policies on copyright and civil behavior.
McKeon, Viégas, and Wattenberg (2007) note that Wikipedia has “myriad guidelines, policies and rules” and “complex and bureaucratic processes [that run] counter to naïve depictions of Wikipedia as an anarchic space.” Viégas, Wattenberg, Kriss, and van Ham (2007) also found that policy pages are growing on Wikipedia almost as quickly as content pages.
This is a problem. Clay Shirky notes: “Process is an embedded reaction to prior stupidity,” meaning “an organization slowly forms around avoiding the dumbest behaviors of its mediocre employees, resulting in layers of gunk that keep its best employees from doing interesting work.” Some editors (though only a small number, it appears) leave because they find the Wikipedia environment too bureaucratic.
The applicability of WP:IGNORE is an interesting question.
Spek, Postma, and Herik (2006) conclude that Wikipedia may be seen as an ultimate self-managing team. Benkler (2006, p. 104), in his discussion of the peer-based commons-production model, notes that Wikipedia is the strongest example of a discourse-centric model of cooperation based on social norms. Sanger (2007) mentions Wikipedia's self-selecting membership. Viégas, Wattenberg, and McKeon (2007) use Ostrom's theory of collective self-governance. Bruns (2008) discusses Wikipedia as a prime example of his ad-hoc meritocracy, where produsers (content producers and end users) participate at will, limited only by their skills and interests. In Konieczny (2010) I argue that Wikipedia has many qualities of an adhocracy, characterized by:
work organization rests on specialized teams
few barriers to enter or leave a team
decentralization / freedom from hierarchy
little formalization of behavior
roles not clearly defined
culture based on non-bureaucratic work ("getting the job done as quickly and efficiently as possible")
For decision making, that system appears to be a deliberative, direct democracy (with a few elements of a representative democracy)
For organization of work, the system appears to be a mix of bureaucracy and adhocracy
On how I moved from being an inclusionist to a deletionist
Attributed to me: Piotrus' Principle: "If it's worth proving on the talk page, it's probably worth mentioning in the article." (Source: Rules by others, #199)