
Proposal for Off-Site Rules Revision


DarkGrath (VS Battles Administrator, Human Resources)
Note: This thread is staff-only. You may request permission to reply from a staff member if you believe you will have productive input on the topic.

Introduction:



Hello. I am making this thread today to propose a revision to our off-site rules.

I would like to preface this by saying that, while this revision is obviously being made in light of the recent controversy regarding Chasekilleen’s case on the Rules Violations Report Thread – a case in which I brought up some of the concerns with our off-site rules that I will express here – I do not wish for the discussion on this topic to be focused on how we should proceed with Chasekilleen’s case, or for this discussion to be leveraged to attain a specific verdict in that case. This discussion is on the broader issues at play, and how we should address incidents of off-site misconduct in the future. With that out of the way, here is my proposal.


Foreword:



Before we can discuss the rules themselves, we need to address an underlying question – why do we have rules in the first place? Why don’t we just decide everything on a case-by-case basis, going against the rules when we feel they are wrong? This may seem pedantic, silly, or even offensive, but it’s quite important. I will leave this in a collapsible quote for the sake of not cluttering the OP, but I implore anyone intending to give an opinion on these proposals to read this to understand the lines of reasoning used thereafter.

Most often, when discussing matters where we should/shouldn’t punish a particular action, common reasoning is to say “It is against the rules, therefore we should punish it” or “It is not against the rules, therefore we shouldn’t punish it”. In essence, 'Kohlberg's conventional moral reasoning', for those familiar with the term. The implicit understanding is that the rules have some justified basis, whatever that basis may be, so actions which follow the rules are justified and actions that violate the rules are unjustified. This usually functions as a cohesive rationale, but it’s obviously completely unhelpful to this discussion. We aren’t questioning what is or is not against the rules – we are questioning what should be or shouldn’t be against the rules, which requires more complex, post-conventional reasoning about the justifiability of the basis of the rules themselves.

Almost everyone, whether they have put it into words or not, has an opinion on this – and almost everyone has a slightly different opinion on it. A truly comprehensive exploration of this question would require delving into moral philosophy to an extent that I am afraid would be a huge detraction from the intended topic of this thread. Rather than aiming to discuss this in depth, I merely want to offer one clear, unambiguous suggestion for the answer to this question that we can use as a framework – if only to ensure we are all on the same page.

My stance is this: rules are justified to the extent that they uphold universal principles that any reasonable individual, bound by those rules and with the understanding that they could possess the position of anyone bound by those rules, would agree are preferable to the alternative. In other words, our rules should be these principles, codified into a concrete format. For the sake of being less verbose in future references to this, I will refer to this standpoint as the ‘principle of preferability’.

This is somewhat obtuse, so let’s illustrate it with a practical example: the use of slurs. We do not allow the use of slurs towards other people on the wiki; it is against the rules. Why are they against the rules? Consider the fact that the alternative – a wiki where people are allowed to call each other slurs – would be substantially less preferable to a reasonable person than the version of the wiki we have now. A reasonable person would refuse to partake in such a community because, even if it meant a restriction of their own natural liberties (in other words, a restriction of their ability to call other people slurs), they would prefer a community where no one was able to call each other slurs to the alternative. Therefore, the use of slurs on the wiki is unjustifiable, and is made against the rules. Importantly, this means that the use of slurs isn’t unjustifiable simply because it is against the rules – it is unjustifiable because a reasonable person would willingly forgo their right to use slurs if it meant partaking in a community where people did not call each other slurs.

Those well-versed in moral and political philosophy may note that this stance is very similar to that of Rawls in the ‘Original Position’ thought experiment, where people have to decide what type of society they would choose to live in while having no concept of their ethnicity, gender, sexuality, socio-economic status, or otherwise. Variations of this notion – that rules are 'justified' to the people in a community if they follow universalised principles that all reasonable individuals would agree to if they had no concept of the position they would have in the community – are a common line of reasoning (if not the most common and well-regarded) in discussions on definitions of ‘justice’, and one I subscribe to. Hence why I advocate for this framework for understanding the implicit basis of our rules. Importantly, I do not want to needlessly expand on the totality of this stance; only the elements which are of relevance to our rules and our community, hence why the 'principle of preferability' is a simplified version of this notion.


The Problem:



Our current off-site rules regarding ‘irrelevant behaviour’ in section 2.7 of our Site Rules page state the following:
  • Off-site behavior is usually irrelevant except in cases of:
    • Actions that lead to the destabilization of the site (such as videos, forum posts, Discord chats, etc. that create drama), whether or not it was systematic. To determine what counts as destabilization of the site one should mostly look at the consequences of said act rather than the individual act itself.
    • Threatening someone off-site, be it a threat of violence, hacking, doxxing, sexual harassment, etc.
    • Harassment of users in their immediate surroundings (ex. Someone constantly messaging you with insulting comments via DMs or PMs)
    • Engaging in online criminal activity (Not including piracy).
    • Impersonating someone for malicious purposes.

I would argue that, as has been characterised in many previous discussions on our off-site rules, there are three implicit principles these rules follow:
1: Off-site misconduct is usually not in our jurisdiction, and therefore cannot be met with on-site punishments.

2: Off-site misconduct that is a threat to the stability of the wiki is in our jurisdiction, to the extent that we can punish these actions for their on-site impacts.

3: Cybercrime is not in our jurisdiction, but members found to have engaged in such activity must be removed out of legal obligation.

Interestingly, I don’t believe these three oft-cited principles completely encapsulate these rules, but I will touch on this later.

Ultimately, the intention behind these rules is obvious: we aren’t the internet police. We manage our own website and forum, and that’s that. Because we subscribe to that philosophy, the underlying principle behind all three aforementioned principles is this – ‘whatever an off-site act may be, if it is not illegal and it does not threaten the stability of our site, we shouldn’t take action on it’. I intend to argue, from the framework I have outlined above, that this principle is insufficient for our rules to be justified.

To illustrate this, let’s use a hypothetical example:
Two users have previously been involved in an abusive relationship. User A is someone who was abused by User B in the past. User A has been suffering from PTSD as a result of this abusive relationship, PTSD which significantly reduces their quality of life, and interactions with User B are triggers for this PTSD. User A joins the website, and later, User B joins the website. User B does not break any on-site rules in their interactions with User A (for instance, they do not directly harass User A). User B’s actions are ultimately acceptable on-site conduct in general, and the two no longer interact off-site. However, User B does persistently interact with User A on the website, and thereby knowingly triggers User A’s PTSD, lowering their quality of life substantially.

By our rules, we would have no basis on which to punish User B. User B would be permitted to continue engaging with the site and enabled to continue interacting with User A. However, I would argue this is in violation of the principle of preferability. A reasonable person, bound by these rules and with the awareness that they could have any position within this community (including the position of User A), would not want to partake in a community in which abused people are forced to interact with a person who has abused them in the past. It is a diminishment of one’s quality of life that far outweighs the diminishment of the quality of life of the abuser in the circumstance where this is punishable – as such, even if it means forgoing the liberty of interacting with someone if they were an abuser themselves (i.e.: if they were to have the position of User B), this would be the preferable alternative. The fact that our rules do not enable us to take action in such a circumstance shows there is an incongruence between our Site Rules and the principle of preferability, indicating that our site rules are unjust. This is the problem.


The Solution:



Now that we have established that the current state of the rules is the unpreferable alternative, we need to ask – what is the preferable alternative? To answer this, a good method is to explore why exactly a reasonable person would take moral objection to the former circumstance (the illustrative abusive relationship), or otherwise similar counterexamples to the state of our rules. And to this, I believe there are a couple of things we can point to.

1: In the former circumstance, User B’s interactions on-site – while not manifesting as direct threats – are just as harmful as direct threats would be to User A’s wellbeing. We obviously punish direct threats made by users on the wiki, and we can refer back to the principle of preferability to say why it should be that way. Because direct threats bring undue distress and reductions in wellbeing to people such that, even if we had to circumvent our own liberty to make threats ourselves, a reasonable person would choose to abide by that principle. The reason why we should take action against direct threats, then, is because the harm they cause is unjustifiable to the people subjected to it. When these interactions ultimately cause the same kind of harms as a direct threat, with the only difference being the content of the interaction (a morally arbitrary variable under the principle of preferability), we ultimately have not justified why we would consider one punishable and the other unpunishable.

2: Perhaps more pertinently, User B’s past off-site actions against User A implicate the potential for further harmful interactions in the future. Even if the relationship between User B and User A has ended, a sticking force comes with an imbalanced power dynamic – merely the fact that User B could leverage their interactions to cause further harms will likely instigate changes in User A’s behaviour out of a fear for their safety or wellbeing, potentially to the detriment of their physical or mental health. Referring to the principle of preferability, a reasonable person would not want to partake in a community in which they had cause to believe their safety and/or wellbeing was at risk due to the presence of another user, even if it meant they did not have the liberty to put the safety and/or wellbeing of others at risk. Importantly, this reasoning stands even if such cause does not ultimately manifest in directly harmful misconduct – the implicit threat of harmful misconduct is the ‘harm’ in itself.

If we understand that these are the kinds of circumstances in which taking action on the basis of off-site misconduct may be justified, this naturally leads into two further principles as an addendum:
1: Off-site misconduct is in our jurisdiction if it could reasonably cause undue harm and/or distress to a user in on-site interactions.

2: Off-site misconduct is in our jurisdiction if it could be reasonably construed as inconducive to the safety and/or wellbeing of a user, or a denomination of users, in on-site interactions.

As some of the more skeptical among you may be relieved to note, these principles are tied back to their impacts on on-site interactions. The reasoning for this, following from what we have already explored, is simple: a matter which is performed off-site, and never concerns any of our users, cannot realistically result in an unjustifiable circumstance for any of our users, and therefore should not fall under our jurisdiction. Even if that action is something potentially objectionable, we still aren’t the internet police at the end of all this, and I am not suggesting we should be. Our concern is with facilitating a community in which the rules are justifiable to our users, not with making other communities follow suit.

And remember how I said earlier that the three prior principles I noted in our current off-site rules don’t seem to fully encapsulate them? This would be a good time to elaborate on this by pointing to the oddball off-site rules we currently have.
  • Harassment of users in their immediate surroundings (ex. Someone constantly messaging you with insulting comments via DMs or PMs)
  • Impersonating someone for malicious purposes.

In almost every discussion about the off-site rules, we question whether off-site behaviour is destabilising or illegal, and follow the rules from there. My question is – do these rules always describe illegal or destabilising behaviour? Certainly, depending on context, they could. Impersonating a high-ranking staff member on a Discord server for the sake of defaming them could be destabilising. Depending on what the ‘harassment’ constitutes, and the laws surrounding harassment in your region (particularly, depending on how developed online safety laws are), it could be done in an illegal manner. But by the current phrasing of our rules, we are already within our rights – in unusually specific circumstances, at that – to punish misconduct that isn’t clearly illegal or destabilising. I find this odd, considering how often discussions on off-site misconduct revolve around whether the behaviour was illegal or destabilising, with action almost always being rejected if neither can be established.

I bring this up to illustrate that I believe we already have a (very limited) manifestation of the principles I have proposed above. When we frame the rules we already have in the understanding that we should punish off-site misconduct that could cause undue harm/distress or make people feel unsafe in on-site interactions, these oddball rules make a bit more sense. Harassment and impersonation, even done off-site, could reasonably cause harm, distress, or the perception of a dangerous environment for the affected user in on-site interactions with the perpetrator. This is ultimately a minor point in the grand discussion, but for what it’s worth, I believe we already have some inkling of these principles in our rules as they are currently written. What I am suggesting is not revolutionary - it is just more consistent. It is an expansion to encapsulate the full principles that justify the rules, rather than only having rules for very particular examples of those principles. And that is the solution to the problem.


The Proposal:



With all of this considered, what is my proposal for the exact change to the phrasing of the rules?

For reference, these are the current off-site rules:
  • Off-site behavior is usually irrelevant except in cases of:
    • Actions that lead to the destabilization of the site (such as videos, forum posts, Discord chats, etc. that create drama), whether or not it was systematic. To determine what counts as destabilization of the site one should mostly look at the consequences of said act rather than the individual act itself.
    • Threatening someone off-site, be it a threat of violence, hacking, doxxing, sexual harassment, etc.
    • Harassment of users in their immediate surroundings (ex. Someone constantly messaging you with insulting comments via DMs or PMs)
    • Engaging in online criminal activity (Not including piracy).
    • Impersonating someone for malicious purposes.

And, with the aforementioned principles integrated (and with any redundancies created in the process removed), here is my proposal for the new phrasing of the rules:
  • Off-site behavior is usually irrelevant except in cases of:
    • Actions that lead to the destabilization of the site (such as videos, forum posts, Discord chats, etc. that create drama), whether or not it was systematic. To determine what counts as destabilization of the site one should mostly look at the consequences of said act rather than the individual act itself.
    • Actions taken against another user off-site of such a nature that could reasonably cause undue harm and/or distress for the other user in on-site interactions. This includes, but is not limited to: harassment, threats of violence or similar harmful actions, unsolicited sexual misconduct, impersonation, hacking, and doxing.
    • Actions made off-site that could be reasonably construed as inconducive to the safety and/or wellbeing of a user, or a denomination of users, in on-site interactions. This includes, but is not limited to: threats directed towards particular demographics (i.e.: racial, gendered, sexual, and/or religiously motivated threats to commit violent acts), obscenities of an implicative nature (i.e.: hate speech, sexual comments towards minors), and involvement with known hate groups.
    • Engaging in online criminal activity (Not including piracy).

While I believe the phrasing can speak for itself, I would like to offer clarity on the exact things these rules are/aren’t intended to encapsulate.
  • These rules are intended to encapsulate instances in which actions by a user off-site could cause undue harm/distress to another user in on-site interactions, such as targeted bullying of a user or non-consensual, sexually-connotated interactions. These rules aren’t intended to encapsulate instances of off-site bantering/teasing of a trivial or friendly nature, respectful criticisms, or feuds between friends.
  • These rules are intended to encapsulate instances in which actions by a user off-site could be reasonably construed as inconducive to the safety/wellbeing of another user, such as threatening violence against someone on the basis of their race, stating intention to mistreat a particular religious group, or partaking in a known LGBT hate group. These rules aren’t intended to encapsulate instances of saying the n-word in a casual server on Discord, partaking in taboo fetishes in a wholly private and consensual environment, or being a critic of a particular group of people with no harmful intentions towards them.
Ultimately, my problem with our off-site rules has revolved around one thing: the fact that we have shown we are reluctant to take action against people for off-site misconduct, even when that reluctance comes at the cost of the safety and wellbeing of our users. Even if there are concerns with any individual component of this proposal, my point – which I hope we can see eye-to-eye on – has been this: we should take the necessary actions for the safety and wellbeing of our users, actions that we have repeatedly and consistently failed to take under the rules as they currently stand.

This proposal has been a very verbose way of justifying that concern in a concrete format, revising our rules to allow us to take necessary actions for the safety and wellbeing of our users, and to do so at minimal cost to the boundaries of our jurisdiction. I hope we will be able to reach a swift and reasonable resolution to this matter, and that any refinements or adjustments to these proposals can be handled comprehensively in our following discussion. Thank you for your time.
 
I do not disagree with the proposal per se, but I believe there are some elements that warrant consideration and specific (and common) hypotheticals that should be explored with regard to this new framework.

RVR disputes are messy, stressful, and time-consuming by default, and only become more so when the dispute revolves around elements that we do not have a definitive way to authenticate. I believe we open ourselves up to a lot of very muddy business in some scenarios. For instance, your PTSD example. It is easy to assume for the sake of the hypothetical that User B is an abusive person and that User A is an innocent and unwitting victim, but there are two sides to every story. If we put ourselves in the position of deciding that their past abusive relationship dictates whether or not User B can remain on the site, we now have to open ourselves up to adjudicating the true nature of their relationship should it be contested by User B, who may claim that in fact User A was the abusive one. Unlike the forum, we have no way of really knowing who is at fault, and given how challenging and often even emotional people get about relatively pedestrian forum disputes, us trying to sort out an abusive relationship off-site becomes impractical in my opinion.

I also believe that, aside from the practical difficulties, we are going to have to deal with some complicated decisions with regard to where exactly we draw the line. I will strongly caveat that I am against the use of slurs, and that I am not saying this out of fear that someone might have some unsavory screenshots of me saying something horrible; I'm just not that kind of person, and I do not find humor in it. But a lot of people do. There is a reason why the N word is sometimes referred to as the "gamer word." If you join an online community revolving around traditionally nerdy interests, you'll be subjected to a lot of immature shock-value humor where the offensiveness is what's funny about saying something, and this can be taken to extremes.

For those members of our community who are LGBT or part of an ethnic minority – viewed through the lens of undue harm or stress, or one's safety or wellbeing – are we now in the position of moderating all racist, homophobic, or transphobic sentiments expressed by members outside this forum? If we are, how are we going to handle that? What if it's a joke? What if that person claims to be a member of the community the slur is meant for and thus has the right to say it? What if that person claims the screenshot is inauthentic?

I believe these are significant challenges for us.

I'll also say one more thing, and this is more confrontational than perhaps I'd like for it to be, but I think it does need to be said. When the Chase issue came up, I saw an astounding level of input from staff members. I have not seen that many staff provide their input on an RVR matter perhaps since we banned Weekly. The vast majority of our moderators and admins do not regularly engage with RVR. Of those that do, it's mostly Bambu, Ant, you, me, Agnaa, DDM, and Dereck. This doesn't make them bad mods or something, but I believe there is a genuine risk of people with little to no stake in the consequences of a certain proposal agreeing to it as a point of moral virtue, with no genuine intention of dealing with the issues likely to arise as a result of it on a regular basis. I believe that should not be overlooked.
 
By our rules, we would have no basis on which to punish User B. User B would be permitted to continue engaging with the site and enabled to continue interacting with User A. However, I would argue this is in violation of the principle of preferability. A reasonable person, bound by these rules and with the awareness that they could have any position within this community (including the position of User A), would not want to partake in a community in which abused people are forced to interact with a person who has abused them in the past. It is a diminishment of one’s quality of life that far outweighs the diminishment of the quality of life of the abuser in the circumstance where this is punishable – as such, even if it means forgoing the liberty of interacting with someone if they were an abuser themselves (i.e.: if they were to have the position of User B), this would be the preferable alternative. The fact that our rules do not enable us to take action in such a circumstance shows there is an incongruence between our Site Rules and the principle of preferability, indicating that our site rules are unjust. This is the problem.
I don't know if I'd interpret this as anything short of harassment. The context for harassment is separate from what is traditional, but harassment it remains – it is aggressively pursuing a user against their wishes, even if it looks benign from the casual observer's perspective. A nitpick, to be sure; I just don't know what actionable example you could come up with here that would suit your needs.

And remember how I said earlier that the three prior principles I noted in our current off-site rules don’t seem to fully encapsulate them? This would be a good time to elaborate on this by pointing to the oddball off-site rules we currently have.

  • Harassment of users in their immediate surroundings (ex. Someone constantly messaging you with insulting comments via DMs or PMs)
  • Impersonating someone for malicious purposes.

In almost every discussion about the off-site rules, we question whether off-site behaviour is destabilising or illegal, and follow the rules from there. My question is – do these rules always describe illegal or destabilising behaviour? Certainly, depending on context, they could. Impersonating a high-ranking staff member on a Discord server for the sake of defaming them could be destabilising. Depending on what the ‘harassment’ constitutes, and the laws surrounding harassment in your region (particularly, depending on how developed online safety laws are), it could be done in an illegal manner. But by the current phrasing of our rules, we are already within our rights – in unusually specific circumstances, at that – to punish misconduct that isn’t clearly illegal or destabilising. I find this odd, considering how often discussions on off-site misconduct revolve around whether the behaviour was illegal or destabilising, with action almost always being rejected if neither can be established.
I've often argued for offsite action to be taken in instances of harassment, regardless of the legality of the situation. The problem is the vagueness of what harassment really is in many of these scenarios, where we often have only fragments of the truth to work with. In an ideal scenario, where we can know the full truth, our rules allow us to remove individuals for harassment – in the instance of your first example, this would be so, and I would argue that, with our current rules, we would be within our rights to handle it.

Off-site behavior is usually irrelevant except in cases of:
  • Actions that lead to the destabilization of the site (such as videos, forum posts, Discord chats, etc. that create drama), whether or not it was systematic. To determine what counts as destabilization of the site one should mostly look at the consequences of said act rather than the individual act itself.
  • Actions taken against another user off-site of such a nature that could reasonably cause undue harm and/or distress for the other user in on-site interactions. This includes, but is not limited to: harassment, threats of violence or similar harmful actions, unsolicited sexual misconduct, impersonation, hacking, and doxing.
  • Actions made off-site that could be reasonably construed as inconducive to the safety and/or wellbeing of a user, or a denomination of users, in on-site interactions. This includes, but is not limited to: threats directed towards particular demographics (i.e.: racial, gendered, sexual, and/or religiously motivated threats to commit violent acts), obscenities of an implicative nature (i.e.: hate speech, sexual comments towards minors), and involvement with known hate groups.
  • Engaging in online criminal activity (Not including piracy).
Despite some misgivings as I read on, I do agree with this adjustment. I do have concerns about what precisely is the limit of what we would pursue as a rule violation – if someone was very mean to another person three years ago, and then found them again on our Forum and reached out, would we ban that individual? – but the letter of the rule is generally acceptable to me.

It is no secret that I regard our offsite policies with a great deal of caution and hesitancy. It is my opinion that we should, in general, deal with as little outside our scope as is possible, and so I want our rules to reflect a minimalist approach to offsite behavior and actions we might take against it. In stating all of this, I am hoping to highlight that I see the importance of these adjustments, and that my concerns lie only in the fine details.
 
If I may,

Something to take into consideration is a point that some other staff made in the RVR: by the nature of off-site behavior that would warrant an on-site report, the report is bound to be very controversial and multi-layered.

As part of the concerns @Deagonx had, he highlighted that there's usually more than one side to a story, and it may be a bit more complicated than 'User B abused User A'.

With that in mind, one potential addition that's worth considering is making off-site reports the jurisdiction of Human Resources, similar to what we do with staff reports, or requesting that these reports be made to admins in PM, who can then relay the report to the staff chat.

This way we can allow for proper investigation and discussion before a verdict is reached out in the wild.

(Something akin to a proposal @Bastolan27 made)
 
With that in mind, one potential addition that's worth considering is making off-site reports the jurisdiction of Human Resources, similar to what we do with staff reports, or requesting that these reports be made to admins in PM, who can then relay the report to the staff chat.
I certainly don't agree with assigning all off-site reports to Human Resources, as they already have plenty to deal with and have very few members as it is. However, I do think that it is worth dealing with these reports away from the public eye.

In general, I wonder to what extent we benefit versus suffer from handling RVR matters in public in the first place. I understand the desire for transparency, but I believe we could still achieve an acceptable level of transparency in terms of clarifying what decisions were ultimately made and why, without necessarily having a public drama arena that is the constant source of headaches here.
 
Okay, the proposals here definitely come with a lot of nuance. Major props to Grath for being so thorough about this – especially with how sensitive a subject this could potentially be.

I find myself in agreement with this proposed change. I fully understand that at the end of the day, we're not the Internet police, nor should we really try to act as such to a large degree. However, I also firmly believe that there are some more extreme, exceptional cases (and not just those involving illegal actions or those that threaten to destabilize the site) that we should be taking action on for the safety and well-being of our user base - as Grath herself has alluded to.

Bambu and Deagonx are also right in that we need to exercise caution when it comes to applying this sort of thing, as it can get out of hand very quickly if mishandled. However, I think Grath has made it pretty clear what she believes would and wouldn't constitute rule violations. To illustrate my own opinion on the matter as well, I'd like to take a look at some examples brought forward by them and what I make of them. Feel free to respond with what you make of my takes on these as well.
RVR disputes are messy, stressful, and time-consuming by default, and only become more so when the dispute revolves around elements that we do not have a definitive way to authenticate. I believe we open ourselves up to a lot of very muddy business in some scenarios. For instance, your PTSD example. It is easy to assume for the sake of the hypothetical that User B is an abusive person and that User A is an innocent and unwitting victim, but there are two sides to every story. If we put ourselves in the position of deciding that their past abusive relationship dictates whether or not User B can remain on the site, we now have to open ourselves up to adjudicating the true nature of their relationship should it be contested by User B, who may claim that in fact User A was the abusive one. Unlike the forum, we have no way of really knowing who is at fault, and given how challenging and often even emotional people get about relatively pedestrian forum disputes, us trying to sort out an abusive relationship off-site becomes impractical in my opinion.
I can see the concern here, especially given that in cases that are less black-and-white, this can easily devolve into a "he said, she said" mess where the truth of the matter becomes a lot more blurred. However, I've seen a few cases like this elsewhere (as in, not related to the site), and I believe there are a few things worth paying attention to in cases like this. Full disclosure, I understand these aren't ironclad, but I believe they work well from my experience:
  • How much support does each side of the situation get? Very frequently, the actual victim in this situation receives considerably more support (such as from, say, a circle of friends that both people were in) than the abuser, even if the abuser claims to be the victim.
  • How strong is the evidence on either side? The actual victim will often have much more compelling evidence in their favor, such as actual screenshots and the like. The abuser pretending to be the victim will often lean more on "he said, she said" rhetoric or forged evidence to compensate for the fact that they're knowingly in the wrong.
  • To add on to the first bullet point, are there any particular accounts from people knowledgeable on the situation? If so, who do they favor?
I believe stuff like this could be helpful in ascertaining the truth of the matter in these sorts of situations.
I also believe that, aside from the practical difficulties, we are going to have to deal with some complicated decisions with regard to where exactly we draw the line. I will strongly caveat that I am against the use of slurs, and that I am not saying this out of fear that someone might have some unsavory screenshots of me saying something horrible; I'm just not that kind of person, and I do not find humor in it. But a lot of people do. There is a reason why the N word is sometimes referred to as the "gamer word." If you join an online community revolving around traditionally nerdy interests, you'll be subjected to a lot of immature shock-value humor where the offensiveness is what's funny about saying something, and this can be taken to extremes.

For those members of our community who are LGBT or part of an ethnic minority – viewed through the lens of undue harm or stress, or one's safety or wellbeing – are we now in the position of moderating all racist, homophobic, or transphobic sentiments expressed by members outside this forum? If we are, how are we going to handle that? What if it's a joke? What if that person claims to be a member of the community the slur is meant for and thus has the right to say it? What if that person claims the screenshot is inauthentic?
The real answer to this is that there's no one solution that everyone's gonna agree with, though I believe Grath mentioned her take on this:

These rules are intended to encapsulate instances in which actions by a user off-site could be reasonably construed as inconducive to the safety/wellbeing of another user, such as threatening violence against someone on the basis of their race, stating intention to mistreat a particular religious group, or partaking in a known LGBT hate group. These rules aren’t intended to encapsulate instances of saying the n-word in a casual server on Discord, partaking in taboo fetishes in a wholly private and consensual environment, or being a critic of a particular group of people with no harmful intentions towards them.

Personally, I'm more in agreement with this specification, though perhaps a bit stricter in terms of the things disallowed (such as, for example [and heads up in advance, as I'm gonna allude to some heavy stuff, including discrimination against minority groups], "jokes" that carry heavily bigoted sentiments, like those involving trans people's suicide rates, slavery "jokes" about people of color, etc., especially if these are directed toward a specific user).

It'd probably be best to specify the sorts of things that do and don't qualify for legitimate off-site reports, so we can lessen the risk of a bunch of ill-intended reports against people whom some may personally dislike.
I do have concerns about what precisely is the limit of what we would pursue as a rule violation – if someone was very mean to another person three years ago, and then found them again on our Forum and reached out, would we ban that individual? – but the letter of the rule is generally acceptable to me.
My take on the example given, personally, is that it depends on what the person's doing in their attempt to reach out. If they're trying to engage in more harassment and followed this other person to our Forum to do that, for example, we should definitely ban them. If they're trying to reach out to just talk normally, it'd depend on whether the other person is comfortable with it (if they're not, we'd need to tell them to refrain from speaking with this other person, kinda similar to Grath's example at the end of her "The Problem" section). If they're trying to reach out in an attempt to make amends, we definitely shouldn't ban them, though we should also still be mindful of whether or not this other person is comfortable with speaking with them.
If I may,

Something to take into consideration is a point that some other staff made in the RVR: by the nature of off-site behavior that would warrant an on-site report, the report is bound to be very controversial and multi-layered.

As part of the concerns @Deagonx had, he highlighted that there's usually more than one side to a story, and it may be a bit more complicated than 'User B abused User A'.

With that in mind, one potential addition that's worth considering is making off-site reports the jurisdiction of Human Resources, similar to what we do with staff reports, or requesting that these reports be made to admins in PM, who can then relay the report to the staff chat.

This way we can allow for proper investigation and discussion before a verdict is reached out in the wild.

(Something akin to a proposal @Bastolan27 made)
From what I understand, HR is already tied up as is, so I don't think this would be very practical - in fact, it may only serve to strain HR even more.

And that about sums up my take on all this. I am very much in agreement with Grath's proposal, but I'm also mindful of the responsibility we'd need to bear by implementing this.
 
I would argue that, as has been characterised in many previous discussions on our off-site rules, there are three implicit principles these rules follow:
I don't know if I've mentioned this before, but I identify different principles to those. Simply, off-site conduct is not our jurisdiction unless it involves direct harm, or a significant risk of direct harm, to one of our users or the site as a whole. I'd emphasize "significant risk" and "direct harm" since we try to have a higher bar for this sort of thing once it goes off-site.
By our rules, we would have no basis on which to punish User B. User B would be permitted to continue engaging with the site and enabled to continue interacting with User A. However, I would argue this is in violation of the principle of preferability. A reasonable person, bound by these rules and with the awareness that they could have any position within this community (including the position of User A), would not want to partake in a community in which abused people are forced to interact with a person who has abused them in the past. It is a diminishment of one’s quality of life that far outweighs the diminishment of the quality of life of the abuser in the circumstance where this is punishable – as such, even if it means forgoing the liberty of interacting with someone if they were an abuser themselves (i.e.: if they were to have the position of User B), this would be the preferable alternative. The fact that our rules do not enable us to take action in such a circumstance shows there is an incongruence between our Site Rules and the principle of preferability, indicating that our site rules are unjust. This is the problem.
For one thing, I think we would be able to take action in that case. Most of the stuff you're talking about here happens on-site; if User A asks for User B to stop interacting with them, yet User B continues persistently doing so, then that would be grounds for User B to be banned.

For another thing, part of the issue with off-site stuff is that it's incredibly difficult to litigate. I've been in some communities where issues of abusers following their victims have come up, and it's a total clusterfuck. Because in those cases, it's not exactly rare for the abuser to pretend to be the abused, to kick User A out of such a community. And often the abuse is something that is wholly unverifiable either way, due to taking place in-person, or over now-deleted phone calls, or through easily-forgeable emails. And I hate to say it, but a lot of abusive situations provoke abusive responses, which could implicate even the victim if presented selectively.

This sort of thing is the reason why Fandom doesn't bother itself with that.
1: In the former circumstance, User B’s interactions on-site – while not manifesting as direct threats – are just as harmful as direct threats would be to User A’s wellbeing. We obviously punish direct threats made by users on the wiki, and we can refer back to the principle of preferability to say why it should be that way. Because direct threats bring undue distress and reductions in wellbeing to people such that, even if we had to circumvent our own liberty to make threats ourselves, a reasonable person would choose to abide by that principle. The reason why we should take action against direct threats, then, is because the harm they cause is unjustifiable to the people subjected to it. When these interactions ultimately cause the same kind of harms as a direct threat, with the only difference being the content of the interaction (a morally arbitrary variable under the principle of preferability), we ultimately have not justified why we would consider one punishable and the other unpunishable.
I really really really don't like this road of litigating what "reasonable people" would find "upsetting". This sort of thing can easily go out of whack if we enshrine it.
"Oh VSBW? That's the website where they say it's not reasonable to be upset by someone spewing the n-word constantly, right? Even when a black person told them how much it upset them to be in a community with such racists, they just said that wasn't reasonable!"
I see such a situation as reasonably likely to come up in the next 2 years if we take this route.
In almost every discussion about the off-site rules, we question whether off-site behaviour is destabilising or illegal, and follow the rules from there. My question is – do these rules always describe illegal or destabilising behaviour? Certainly, depending on context, they could. Impersonating a high-ranking staff member on a Discord server for the sake of defaming them could be destabilising. Depending on what the ‘harassment’ constitutes, and the laws surrounding harassment in your region (particularly, depending on how developed online safety laws are), it could be done in an illegal manner. But by the current phrasing of our rules, we are already within our rights – in unusually specific circumstances, at that – to punish misconduct that isn’t clearly illegal or destabilising. I find this odd, considering how often discussions on off-site misconduct revolve around whether the behaviour was illegal or destabilising, with action almost always being rejected if neither can be established.
I don't think we do? In fact, I see whether it happened in a user's immediate surroundings come up more often than whether it's "destabilising".
I bring this up to illustrate that I believe we already have a (very limited) manifestation of the principles I have proposed above. When we frame the rules we already have in the understanding that we should punish off-site misconduct that could cause undue harm/distress or make people feel unsafe in on-site interactions, these oddball rules make a bit more sense. Harassment and impersonation, even done off-site, could reasonably cause harm, distress, or the perception of a dangerous environment for the affected user in on-site interactions with the perpetrator. This is ultimately a minor point in the grand discussion, but for what it’s worth, I believe we already have some inkling of these principles in our rules as they are currently written. What I am suggesting is not revolutionary - it is just more consistent. It is an expansion to encapsulate the full principles that justify the rules, rather than only having rules for very particular examples of those principles. And that is the solution to the problem.
My main reason for liking the status quo, and not so much a further move, is that the current rules cover cases that are literally inescapable.

You can't just avoid someone sending harassing DMs to you; that has already reached your knowledge. You can't just avoid someone impersonating you maliciously; their malicious acts will inexorably get those events brought up to you.

But someone saying something bad about you in a random Discord that someone screenshotted 6 months later because of a beef over a CRT? You didn't know about that for six months. Someone saying something bigoted on their Twitter that someone eventually dug up 9 months after they said it? No-one knew about it until then because it was so out of the way.

Not to mention the risk of blackmail this sort of thing creates; if a user has such damning screenshots about a staff member that they'd get permanently banned upon them being revealed, there could be some major issues.

And so, I think the changes you're suggesting represent a notable step away from that, not just a slight extension of an existing principle. The principle ended there for a good reason.

And so, I oppose your suggested changes.
 
I largely find DarkGrath's suggestions reasonable, although mentions of extreme bigotry towards disabled people seem to have been omitted from the examples.

I do think that it is a very valid notion to require the more controversial and public knee-jerk, reactive, "court of public opinion without any information verification", destructive, drama-inducing reports to be made in private to our staff, preferably our administrators. After the matters have been reported, though, our much smaller HR group should only evaluate the cases on their own if said cases are especially sensitive, due to time constraints.

Our HR group members are also free to carefully evaluate and suggest new additions to their group, if they need more help.
 
I do agree with Agnaa that it is very hard for us to evaluate exactly what truly happened in potential off-site offenses, though.
 
What do you all think about this?
Part of me is worried that this will lead to a witch-hunt against staff members and other users if not enforced carefully. Don't like a user or their evaluation on your thread? Dig up something from their past said on another forum or chat group and get them banned for it.
 
To throw in an additional aspect to consider regarding the judgement of off-site behaviour: Posting chat conversations without the consent of all involved parties violates their right of data autonomy. Depending on the country the person is a citizen of, it may in fact outright be illegal, and anyone who then shares the screenshots would subsequently engage in illegal behaviour. I, living in Germany, may for instance not do so.
Now, morally, it is a matter of right vs right.
However, from a moral perspective, it is worth asking: What if someone posts chats and it turns out to not actually be a bannable offence? That arguably becomes a very on-site violation of privacy without sufficient reason then.
 
To throw in an additional aspect to consider regarding the judgement of off-site behaviour: Posting chat conversations without the consent of all involved parties violates their right of data autonomy. Depending on the country the person is a citizen of, it may in fact outright be illegal, and anyone who then shares the screenshots would subsequently engage in illegal behaviour. I, living in Germany, may for instance not do so.
Now, morally, it is a matter of right vs right.
However, from a moral perspective, it is worth asking: What if someone posts chats and it turns out to not actually be a bannable offence? That arguably becomes a very on-site violation of privacy without sufficient reason then.
That doesn't feel too relevant to this change, since such concerns seem more likely to come up with instances of direct harassment.
 
Part of me is worried that this will lead to a witch-hunt against staff members and other users if not enforced carefully. Don't like a user or their evaluation on your thread? Dig up something from their past said on another forum or chat group and get them banned for it.
The thing is, though, that I believe Grath’s made it clear what kinds of things qualify vs. what kinds don’t qualify for a legitimate report. And to call back to her “principle of preferability,” the alternative is to let genuinely problematic individuals off with no consequences just because it’s off-site – an alternative that I don’t find to be preferable.
 
That doesn't feel too relevant to this change, since such concerns seem more likely to come up with instances of direct harassment.
Maybe, just wanted to throw it into the room.
Anyway, my main reason for opposing this would be how much trouble and drama it is likely to cause, as well as the fact that I don't like to poke around in people's private business, but I probably won't be involved in dealing with it. So I suppose I'm neutral.
 
What do you all think about this?
I fully agree with DarkGrath's suggested changes.

I do think that it is a very valid concern that bad actors may attempt to weaponize the RVRT against users they personally dislike and waste our staff’s time with reports lacking in substance, so some precautionary measures to dissuade witch-hunts are worth considering.

On another note, it seems inadvisable for very controversial reports to continue to be made in public (given how Dread’s recent report went). I personally agree with the suggestion to have them handled privately.
 
I fully agree with DarkGrath's suggested changes.

I do think that it is a very valid concern that bad actors may attempt to weaponize the RVRT against users they personally dislike and waste our staff’s time with reports lacking in substance, so some precautionary measures to dissuade witch-hunts are worth considering.

On another note, it seems inadvisable for very controversial reports to continue to be made in public (given how Dread’s recent report went). I personally agree with the suggestion to have them handled privately.
I share similar thoughts.
 
I overall think the OP is done really well, but I will say I share some of Deagon, Bambu, and Agnaa's concerns.

I do strongly believe that the community would be a far better place if everyone who is any of the following – racist, sexist, homophobic, transphobic, ageist, ableist, Christophobic, Islamophobic, anti-Semitic, Hinduphobic, Shintophobic, etc. – were removed from the platform. But at the same time, it is extremely easy to take every out-of-context statement one can see and weaponize it against someone. Lots of people say the N word or the R word for out-of-context reasons, with the former just being the "gamer word" or "a rapper's greeting" and the latter being used as an adjective for an inanimate object rather than describing a person. Again, I would prefer not to hear those words at all, but I don't think we can just machine-gun ban everyone who says them out of context off-site, or whenever people screenshot them and try to report them.

It's also not our job to enforce political or religious views on others or force others out of their own. Though, there is a difference between having beliefs prone to controversy and being a literal extremist. A simple belief that "all forms of unorthodox sexual activities are morally concerning" is indeed a controversial religious view, but it's not bigotry. If a workplace manager used that belief to intentionally fire employees by association alone, that would be bigotry. Or worse, any kind of genocide support is definitely severe bigotry that would absolutely be condemnable. But back to square one: some people could just report someone for mentioning a personal religious view and attempt to accuse some nonjudgmental person of homophobia, when the reality is that the person doing the reporting could actually be the one guilty of bigotry, in the form of on-site Christophobia – which one could make a case for punishing instead.

I most definitely think banning people for ridiculous reasons such as petty schoolyard insults would be downright outrageous. I also share Maverick's thoughts that controversial reports such as Dread's should be taken up privately with staff. It is against the rules to report staff members, as opposed to getting into contact with Bureaucrats and/or HR group members. Though, given that the HR group is only a handful of people and half of them are barely even active these days, I think informing other Admins and those higher than Admins could also suffice, and they can perhaps deliver the message to the other staff. Cases such as the Dread report should be kept private instead of causing giant uproars in the RVRT. It should be noted that just because someone is permanently banned does not make it alright to go on witch hunts or engage in mob harassment against them.

Also, I think we should primarily focus on events going forward; it's hard to condemn someone for things done in the past, especially if they were done well before the new and refreshed policies existed. And furthermore, people do change. I don't think someone who used to have somewhat bigoted views, or who was a bully to people in the past, should be compared to those who still are those things. It's pretty common for people who were edgy kids and teenagers, causing trouble wherever they went, to eventually mature to the point of becoming inspirational figures. We sometimes lift the bans of people who were banned years ago due to growth and change like this. Though severity, context, and evident signs of improvement, if any, would be taken into account.

But in conclusion, I have overall mixed thoughts on the additions, but I lean towards thinking they are good, although some rewording and added details that take some of Agnaa's opposing points into account may be beneficial.
 
Part of me is worried that this will lead to a witch-hunt against staff members and other users if not enforced carefully. Don't like a user or their evaluation on your thread? Dig up something from their past said on another forum or chat group and get them banned for it.
We should probably include some wording to prevent this from occurring. For example, I was severely mentally ill many years ago, largely due to being prescribed far too high doses of very destructive mind-altering medicines, and have gradually recovered more and more since then.

One early part of that extremely traumatic and prolonged ordeal was that I tried to process all of my negative input at the time in a very disturbing story heavily featuring psychopathic and sociopathic mindsets (meaning that many of the characters were extremely evil people, and readers were shown quite in-depth how they thought and acted). Later parts involved making some very unhinged, paranoid, and irrational edits on TV Tropes and Wikipedia, believing unsympathetic prejudice fed to me by unreliable propagandistic news sources, and being unfiltered when talking about my extreme anxiety over that misleading information.

My point is that many of us do make a great effort to learn and evolve for the better over time.
 
I largely agree with Medeus' take on this. I think that we should make clear that our standards for off-site behaviour require considerably more extreme acts than on-site for us to take action, and that our members should not suddenly air inflammatory drama-inducing dirty laundry in public.

I am also not sure how DontTalkDT's concern about the legality of providing evidence affects us.
 
I don't think DT's concern matters too much for the site as a whole. At worst, it would just mean that people from certain countries wouldn't be able to present information on certain offences.
 
Okay. Perhaps I should ask ChatGPT4 to investigate in which countries it is forbidden?

Also, more specifically, does this only mean genuinely sensitive private conversations between two people alone, or talking about non-sensitive issues among many other people in a Discord server, or something in-between?
 
I have now asked ChatGPT4, and will send the information in private to DarkGrath, in order for her to be able to reevaluate how we should handle this issue. 🙏
 
Also, more specifically, does this only mean genuinely sensitive private conversations between two people alone, or talking about non-sensitive issues among many other people in a Discord server, or something in-between?
In Germany, basically anything that is invite-only would be considered private, including group chats with dozens of people. Apparently that's frequently relevant when the contents of employee group chats are leaked to an employer (in which case the employer is obligated to ignore them).
I can't comment on any other country.
 
Thank you all for your input on this. I was hopeful that a complicated and important topic such as this one would receive proper consideration from diverse perspectives, and I've not been disappointed.

There is a lot worth responding to, but as I've been asked directly about this matter, I'd like to address the questions on how this would fit with data privacy laws before I respond to anything else.

To address the elephant in the room - we already do have a section in our off-site rules regarding illegal conduct being impermissible, and this section is not being removed or changed in any way in this revision.
  • Off-site behavior is usually irrelevant except in cases of:
    • Engaging in online criminal activity (Not including piracy).

Needless to say, this means that if someone violates data privacy laws in their region for the purpose of a report, that user will be punished accordingly. This is already the case, and this will continue to be the case. This leads me to consider this line of questioning a digression from the topic of the thread. However, I can see the concern this brings up - what if someone breaks data privacy laws to inform us of a very serious offense? Or, perhaps more aptly, what if someone breaks data privacy laws to inform us of a different crime? Would we just punish both of them for their respective offenses? Would we refuse to punish someone on the basis of illegally obtained information? Neither solution is very pretty, and if our answer is "sometimes one, sometimes the other", we should establish when it is one or the other. And as our off-site rules do not currently elaborate on how we would handle such circumstances, this is a hanging issue in our rules that this thread will not resolve.

I have certain ideas on how we could handle this, and I don't deny that this discussion is important. But I also don't believe we need to handle it here. Preferably, if anyone has a strong, reasoned perspective on the matter that they believe can and should be codified in a particular way, they should make a separate thread to discuss the topic. If not, I would be willing to offer my ideas on the topic on a separate revision thread at a later date.
 
Thank you very much for your help, DarkGrath. I trust your sense of judgement regarding this. 🙏
 
I also believe that, aside from the practical difficulties, we are going to have to deal with some complicated decisions with regard to where exactly we draw the line. I will strongly caveat that I am against the use of slurs, and that I am not saying this out of fear that someone might have some unsavory screenshots of me saying something horrible; I'm just not that kind of person, and I do not find humor in it, but a lot of people do. There is a reason why the N word is sometimes referred to as the "gamer word." If you join an online community revolving around traditionally nerdy interests, you'll be subjected to a lot of immature shock-value humor where the offensiveness is what's funny about saying something, and this can be taken to extremes.
I do strongly believe that the community is a far better place if everyone who is any of the following: racist, sexist, homophobic, transphobic, ageist, ableist, Christophobic, Islamophobic, anti-Semitic, Hinduphobic, Shintophobic, etc., were removed from the platform. But at the same time, it is extremely easy to take any out-of-context statement one can find and weaponize it against someone. Lots of people say the N word or the R word for out-of-context reasons, with the former just being a "gamer word" or "a rapper's greeting" and the latter being used as an adjective for an inanimate object rather than to describe a person. Again, I would prefer not to hear those words at all, but I don't think we can just machine-gun ban everyone who says them out of context off-site, or whenever people screenshot them and try to report them.
I intend no disrespect from my immediate tone, and if it comes off like that, then I sincerely apologize, but this has got to be the worst point I have ever read in my entire life.

If somebody pops out and calls somebody a f****t on any website connected to or disconnected from this website, they will be banned in microseconds. On-site, off-site, in their minds, they're gone. Branded as a homophobe throughout the internet forever.
If somebody comes and calls somebody a r****d or says that they're r******d, another ban.

But now a word that people have been banned for using on-site, and even for just quoting, is going to get leniency because it's a "gamer word"? And because it's supposedly an innocent and common thing on the internet, it gets a pass?
You know what else is common on the internet? Homophobia. I see it everywhere. We would excommunicate somebody in a heartbeat for something of that nature.
But we can't uphold the standards for headassery in that department. This is crazy.
 
You have misunderstood me. I am not suggesting that it's innocent or that we should grant leniency. I am not downplaying the significance of these actions; I am pointing out that they are common, and that trying to police them outside of the forum is going to present a significant practical challenge, not only in terms of the volume of offenses we are certain to deal with, but also in terms of the more nuanced situations in which a word may be used.

This is in addition to the limited verifiability of off-site conduct, and the potential to fake such a word being used by an opponent in order to get them banned.
 
All I read is that we can let it slide because we don't wanna ban too many people and there's more unique contexts (there really isn't) that could mean otherwise.
 
Okay, so in your view, would we ban users for using slurs off-site that refer to a community they are a part of? For instance, a gay person using the f slur or a black person using the n word? Are we putting ourselves in the position of saying that this is not allowed and we will ban you if we can confirm you said any slur anywhere on the internet?
 
If a straight man says the F word offsite and that is screenshotted, they're gone
If a white man says the N word offsite and that is screenshotted, then excuses
 
Right, and this is part of the issue that I have. I do not know the ethnicity or sexual orientation of nearly any of our members and I do not want to be put in the position of asking them when they are reported for saying a slur off site.

For that and the other reasons I mentioned, I don't believe this is a good idea. I agree that the use of slurs is detestable, but I believe it is an impractical task to take on ourselves.
 
Given the already extensive deviation to discuss moral philosophy, I would prefer to limit deviations to discuss linguistics. However, I think there is an important aspect of linguistics to acknowledge here.

All language is context. Language is about how we ascribe meaning to what we say, and the meaning of anything we say is dependent on its usage, which is dependent on its context. This includes the use of offensive language, and it's implicitly why we respond very differently to, say, people of different ethnicities using the n-word.

The n-word is offensive because it is used as a derogatory term for the purpose of offending and demeaning black people on the basis of their race. It is not a principle of the universe that it carries this connotation - it carries this connotation because it is used as such. So when, for example, a black person uses the n-word, we can infer it's probably not being used to offend and demean, and therefore has a different meaning in its context.

We have to ban the use of the n-word on-site, whatever convictions we may have about it, because of FANDOM regulations. They're a business that owns the service we're hosted on, and they say they don't want the n-word on their site, so that's that. But when we are dealing with an off-site matter, we have the freedom and capacity of reason to acknowledge that, for example, someone using the n-word to belittle and intimidate someone else and someone using the n-word as an exclamation directed at nobody, with no harmful intent, might warrant being evaluated differently. And I believe the proposals already encompass this well enough. Not because they directly address exactly which contexts every instance of offensive language would or wouldn't be punishable in, but because they address when the kinds of problems that should warrant us taking action may be caused (whether by offensive language or otherwise).

Any attempt to integrate the banning of specific words off-site would almost inevitably result in rule violations reports being made for things that aren't really worth the trouble.
 
I agree with the sentiment of both of those. I don't think we have the practical ability (and probably not right either) to gather the data to make such distinctions and I also don't believe in punishing the use of words regardless of context.
 
Well, I definitely think that people who are not dark brown and who use the n-word with a hard R in a derogatory manner should be banned on sight, but I am not the best person to evaluate how we are supposed to properly investigate this in practice.

I also definitely do not want to punish our members for supplying reliable evidence of extremely bad conduct off-site. I am just concerned about how we can evaluate this in practice. It seems insane that it may be illegal to just supply evidence of outright criminal behaviour, for example.

Can somebody investigate this in more depth and tell me, DarkGrath, and DontTalk in private, please?
 
The legality would depend on which country's laws you're looking at. For instance, it is not illegal in the United States to share the contents of a textual exchange, but in some states it is illegal to record a voice conversation without another party's consent.

I believe we should focus less on the legality aspect. It's impractical to ensure we are adhering to the laws of every country and province in the world, and we would not pardon a user for a terrible action just because they told us it was legal in their country.

Our decision should revolve around what we think is best for the community, counterbalanced by what is practical with our existing resources. I share your sentiment about racial slurs, but I just don't think it's really practical, and I believe we are putting ourselves in an untenable position by trying to regulate all off-site behavior regardless of its nexus to our community.
 
Well, I definitely think that people who are not dark brown and who use the n-word with hard r in a derogatory manner should be banned on sight, but I am not the best person to evaluate how we are supposed to practically properly investigate this.
I believe that this could set a bad precedent, particularly regarding use of the N word off-site. We know that Fandom prohibits its use, but Discord, for example, is not Fandom. And if we are talking about moral regulations or legalities, I will use myself as an example: in my country, the N word is not an insult or a slur. Even when I was in school years ago, many classmates had nicknames using that word, whether their skin was white or black. Another example would be a singer from my country who had to change his stage name because in the U.S. the word was considered derogatory or insulting, and I emphasize again that in my country it is not.

So if someone sees me saying that word on Discord and wants me banned because I said it, what do the rules of Fandom or this forum have to do with my off-site conduct? Will they ban me anyway? I won't say it on Fandom, but if I want, I can say it off-site, and unless the platform where I say it forbids it, nothing can be done. The only way I see this as viable is if there is a continuous pattern of using it aggressively, with insults or harassment from one user towards another that could significantly affect that user. But this regulation seems to be trying to put fear into users who continually use slurs off-site, mostly as jokes or in shit-talk with their friends, so that they will always be afraid or wary that someone will screenshot them and report them.
 
Well, again, I think Grath's proposal covers this, and it pretty much covers the way you see this being viable. Below is her clarification near the end of her OP about what these rules are and aren't intended to cover:
"These rules are intended to encapsulate instances in which actions by a user off-site could be reasonably construed as inconducive to the safety/wellbeing of another user, such as threatening violence against someone on the basis of their race, stating intention to mistreat a particular religious group, or partaking in a known LGBT hate group. These rules aren’t intended to encapsulate instances of saying the n-word in a casual server on Discord, partaking in taboo fetishes in a wholly private and consensual environment, or being a critic of a particular group of people with no harmful intentions towards them."
 