
Banning the use of AI and ChatGPT

Quibster

@Agnaa has given me permission to create this Staff Thread.



The Problem with AI

This might go without saying at this juncture: the use of AI (e.g., ChatGPT, Copilot, Gemini, Grok) has been gradually rising on this wiki. I've seen several members, and even a few staff members, post lines of reasoning that amount to "According to ChatGPT" and "And ChatGPT says" as counterarguments/evidence/sources in debates. Seeing this is frustrating, as it often leads to discussions/debates being derailed.

While I do not believe the use of AI is a problem by itself (e.g., when used in Fun & Games, Memes, et cetera), I believe it's fair to say that there should be a line drawn in the sand. I'll go into a few reasons why AI should be restricted on this wiki, alongside what state-of-the-art AI is capable of, both for those who may be uninformed and as key talking points.

AI is Harder to Detect Than Before

The Yale Daily News published an article in November 2024 discussing how AI has evolved since its integration into the school's curriculum. In the linked article, Anya Geist reports on a project headed by four researchers, three of them from Yale's School of Medicine, who studied how AI-generated essays have become harder to distinguish from human writing:

Anya writes: "In a project organized by four researchers, including three from the School of Medicine, researchers tasked readers with blindly reviewing 34 essays, 22 of which were human-written and 12 which were generated by artificial intelligence. Typically, they rated the composition and structure of the AI-generated essays higher."

More pertinently, the article states that "the readers only accurately distinguished between AI and human essays 50 percent of the time," and that "According to Yale College, inserting AI-generated text into an assignment without proper attribution violates academic integrity."

I believe Dr. Lee Schwamm's quote in the article best illustrates my point here: “How would we even know, other than the word of the author, whether the paper was assisted by generative AI?”

So let’s think critically about this: what happens to a member who admits to using AI, or worse, is caught using it without telling? What if it has gone on for several weeks, months, or even years? Should we impose warnings, maybe even bans? And what about those who do cite AI as a writing tool? What precedent do we set there? With AI now present in higher places of learning, identifying it is akin to flipping a coin; it's just that much harder to detect now. This brings me to a broader concern: if AI becomes a standard in education, it stands to reason that its use will be normalized more than ever. "If an AI wrote it, then why would it be wrong?" is a fairly common viewpoint that I consistently see both IRL and online. Therefore, AI should be regulated in some way before it becomes misused.

Having said all of this, I want to emphasize that I firmly believe a precedent needs to be set. Anyone on this wiki should be able to demonstrate their own understanding, rather than relying on AI to do the heavy lifting. It's not hard to imagine that there are those who believe that wholly AI-generated answers are as credible as a human's, or even more accurate. This brings me to my next talking point:

AI Also Learns: Predictive Responses that Appeal to Bias

AI-generated responses are typically predictive: they provide answers curated for you based on the data the model was trained on, which means that quality and accuracy can vary significantly. Anyone can also train an AI to provide answers in a way that makes sense to a particular user.

For instance, if an AI model has been used to answer questions that lean toward specific viewpoints on a given subject, it may generate content that misleads or even reinforces those biases. In practice, this has been demonstrated by people who cite AI as credible without questioning the origin of its answers.
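
To make the idea concrete, here is a minimal, purely illustrative sketch of what "predictive" means: a toy bigram model that picks the next word from counts in its training text. The corpus, names, and outputs below are entirely hypothetical, and real systems like ChatGPT are vastly more sophisticated, but the underlying principle is the same: the model can only echo patterns present in whatever it was trained on.

  import random
  from collections import defaultdict

  # Made-up, deliberately one-sided training text (hypothetical example).
  corpus = ("goku is stronger than superman . "
            "goku is stronger than saitama . "
            "superman is weaker than goku .").split()

  # "Training": record which word follows which.
  transitions = defaultdict(list)
  for prev, nxt in zip(corpus, corpus[1:]):
      transitions[prev].append(nxt)

  def generate(start, max_words=8):
      # Sampling: every continuation is drawn from the training counts,
      # so a biased corpus can only ever produce biased answers.
      word, out = start, [start]
      for _ in range(max_words):
          followers = transitions.get(word)
          if not followers:
              break
          word = random.choice(followers)
          out.append(word)
      return " ".join(out)

  print(generate("goku"))  # e.g. "goku is stronger than superman ."

Feed this toy model one-sided text and it will confidently "conclude" one-sided things; scaled up, that is exactly how skewed training data turns into answers that flatter a user's existing bias.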

Oftentimes an AI doesn't cite its source, or is simply unable to; therefore, it's possible for a user to credit ChatGPT as the source without ever knowing whether what they've generated is true or not. This is clearly a problem that should also be addressed.

Conclusion

It's evident that AI can undermine one's critical thinking and media literacy, and compromise the ability to participate in a discussion/debate. It also has the potential to be disruptive; you can search "ChatGPT" in the site's searchbar to see what kind of toxicity it leads to. Therefore, I propose the following:
  • Prohibit the use of AI-generated posts (including but not limited to: Versus Threads, Content Revision Threads, Question and Answer Threads, Calc Group Threads, and Staff Threads), as well as the citation of AI (such as "ChatGPT" or similar AI programs) as a source.
  • "AI-Generated Response" should be listed as a fallacy on the Fallacies page, to more concretely dissuade the use of AI in debates.
Its use in Fun & Games or General Discussions should not be treated with similar restrictions. I can see the novelty of AI for what it is, but it very clearly does not belong in the thread types listed above. Otherwise, it should be generally discouraged, if not outright looked down upon, in my opinion.



Agree: Antvasima, DarkDragonMedeus, Flashlight237

Disagree: Vzearr, Damage3245, DontTalkDT, Agnaa, TWILIGHT-OP, Nierre

Neutral:
 
Yeah no, I'm gonna have to disagree. We strive for things to be correct; if AI is correct, then so be it. Sure, it can be annoying, but AI can also be correct, and it's our job to debunk it if it's a debate. If we can't, then so be it. QnA threads are a bit different, though.
 
I don't think we can credit ChatGPT as a direct source, of course (it'd be a less reliable version of saying "According to Wikipedia, XYZ"), but I don't see how this is a good reason to warn/ban people for using AI to assist in writing their arguments/counter-arguments.

If the AI produces argumentation that is flawed or inaccurate, then all that's required is to debunk it, just like you'd debunk a fallible human writing flawed or inaccurate arguments.
 
I agree with the others.

"An AI said it" is not reliable, but I see no reason to care if someone uses it to write their posts or provide their opinions. People can have bad opinions just fine on their own.

And I don't think a person demonstrating their own understanding is important, outside of promotion to staff positions where it would be.
 
I am heavily against this, not only due to the reasons above, but also due to how much AI helps users who are weaker in English than others. I can assure you that a large majority here doesn't speak English as a native language, and some are far better at it than others. Are we gonna limit the use of scaling due to a person's ability to use the English language correctly?

For example, I often use Grammarly, an AI-based aid for correct spelling and grammar. Would that also be disallowed? As well as all other translation and spelling help for users less capable in the English language?

To conclude, my point is that this would only limit the number of users capable of interacting with this site.

Permission granted for making this comment by @Agnaa
 
Yeah no, I'm gonna have to disagree. We strive for things to be correct; if AI is correct, then so be it. Sure, it can be annoying, but AI can also be correct, and it's our job to debunk it if it's a debate. If we can't, then so be it. QnA threads are a bit different, though.
I don't think we can credit ChatGPT as a direct source, of course (it'd be a less reliable version of saying "According to Wikipedia, XYZ"), but I don't see how this is a good reason to warn/ban people for using AI to assist in writing their arguments/counter-arguments.

If the AI produces argumentation that is flawed or inaccurate, then all that's required is to debunk it, just like you'd debunk a fallible human writing flawed or inaccurate arguments.

Yes, AI can occasionally produce accurate information; however, this doesn't negate the fact that it can just as easily generate false or misleading answers. As I mentioned in the OP, the quality of its outputs heavily depends on the data it was trained on. This is compounded by members already taking much of what others say at face value.

If I were debating with a copy-pasted AI response, I think it's fair to say that I would like my posts to be engaging with an actual person rather than an AI. At that point, who am I debating with, really?

I am heavily against this, not only due to the reasons above, but also due to how much AI helps users who are weaker in English than others. I can assure you that a large majority here doesn't speak English as a native language, and some are far better at it than others. Are we gonna limit the use of scaling due to a person's ability to use the English language correctly?
This is a fair concern to raise, but I think it's clear that my intent was absolutely not to gatekeep how people translate their posts. Admittedly, I hadn't taken this into consideration either, so for that, I apologize. Most non-English-speaking folks use DeepL for translation anyhow, which is an AI tool made for translation.

For example, I often use Grammarly, an AI-based aid for correct spelling and grammar. Would that also be disallowed? As well as all other translation and spelling help for users less capable in the English language?
Firstly, Grammarly is a fundamentally different tool and did not start out as an all-purpose generative AI like ChatGPT. It was specifically designed to assist in identifying how you could improve your essays. It's also a tool that has existed since roughly the late 2010s, and I even remember it being used in college before the big boom in generative AI. You were also required to cite it as a utilized tool back in that period, though the standards today may not be the same.

Secondly, I'm specifically making a case against using generative AI to construct entire arguments/points without mentioning its use, especially given how it can be used in misleading and dishonest ways in that context. To put it this way: I'm not against AI as a tool for identifying grammatical errors in essays; rather, I'm against how AI tools like ChatGPT can absolutely be abused to fabricate entire posts/arguments from the ground up.

As a footnote to this, I very purposefully outlined which AI tools I was referring to in the very first sentence of the OP. The common trend among them is fairly self-explanatory.

Edit: Vote count updated
 
I agree; I know various users who have used AI as a debating method, or who have even tried to argue that "AI tool is objectively more reliable than anyone here, I spent 20 dollars on that shit." I also noticed we had a similar discussion a year and a half ago. But yeah, I agree with banning the use of AI for any sort of debating, for anyone.
 
Yes, AI can occasionally produce accurate information; however, this doesn't negate the fact that it can just as easily generate false or misleading answers. As I mentioned in the OP, the quality of its outputs heavily depends on the data it was trained on. This is compounded by members already taking much of what others say at face value.
And when it does, we have the safeguard of staff evaluation to weed out that sort of thing.
If I were debating with a copy-pasted AI response, I think it's fair to say that I would like my posts to be engaging with an actual person rather than an AI. At that point, who am I debating with, really?
The person who saw that response and thought "yeah that makes enough sense to post".

Also, it's not like we have a storied tradition of only allowing arguments written by the person posting them. We allow people to post arguments from other users, from off-site (including YouTube), and even from banned users.
 
I agree with this thread myself. I pushed for a ban on AI two years ago, back when AI was practically incapable of doing much of value. All that managed to happen was that AI art specifically was banned. I would've been happier with a full AI ban. I ain't stupid.
 
A problem here is that artificial intelligence can easily be instructed to argue for either side of an argument. It is virtually tireless and can very quickly write up long but fallacious arguments that our staff would then be further exhausted by having to continuously argue against. As such, it seems dishonest and counter-productive for our purposes. 🙏
 
A problem here is that artificial intelligence can easily be instructed to argue for either side of an argument. It is virtually tireless and can very quickly write up long but fallacious arguments that our staff would then be further exhausted by having to continuously argue against. As such, it seems dishonest and counter-productive for our purposes. 🙏
All the more reason to keep AI out of Vs Debates then.
 
A problem here is that artificial intelligence can easily be instructed to argue for either side of an argument. It is virtually tireless and can very quickly write up long but fallacious arguments that our staff would then be further exhausted by having to continuously argue against. As such, it seems dishonest and counter-productive for our purposes. 🙏
Such stonewalling is already very plausible, and we already have ways to deal with it, so I'm not worried.
 
Such stonewalling is already very plausible, and we already have ways to deal with it, so I'm not worried.
What ways? Our staff cannot be expected to spend many hours per thread arguing with long A.I. responses generated in a few seconds. We also cannot let our interactions here turn completely dishonest, shallow, and artificial, with no humanity left in them.
 
A problem here is that artificial intelligence can easily be instructed to argue for either side of an argument. It is virtually tireless and can very quickly write up long but fallacious arguments that our staff would then be further exhausted by having to continuously argue against. As such, it seems dishonest and counter-productive for our purposes. 🙏
How often does that actually happen on here? If people are posting comments entirely without substance, then staff will be able to see through it pretty quickly.
 
I also severely doubt people will use AI as often as it may seem. AI isn't capable of out-debating current debaters yet, and it will be shut down pretty quickly if it makes fallacious arguments, embarrassing the users who used it and making them less likely to use it without actually reading it.
 
What ways? Our staff cannot be expected to spend many hours per thread arguing with long A.I. responses generated in a few seconds. We also cannot let our interactions here turn completely dishonest, shallow, and artificial, with no humanity left in them.
If a user incessantly argues with nonsense, staff-only threads have the easy escape valve of revoking their permission. In ordinary CRTs, staff can say that a user is stonewalling and refuse to engage with them further. And in matches, people can just be unconvinced and place their votes regardless.
 
Well, I don't mind our members openly admitting to asking A.I. to provide additional matter-of-fact data, such as whether a mathematical calculation makes sense or not. I just do not want us to be forced into being spammed and overwhelmed with long A.I.-generated walls of text in replies to our posts. It would make our work here unmanageable, and I tend to be good at foreseeing potentially problematic social developments, so I would much prefer to prevent this problem before it begins to get out of hand. 🙏
 
Basically, I have a lot of experience asking the most advanced publicly available ChatGPT model lots of in-depth information-gathering questions, with analytical conversations to follow, and I think that the program can very much be used as a substitute for arguing independently, so I do not want any dishonest usage of this resource to systematically overwhelm people who argue for another side. 🙏
 
If people want to systematically overwhelm someone, then they can just as well take an AI text and slightly reformulate it for the same amount of output. After that, whether it's AI or not can't be cleanly determined anymore. If someone actually intends to abuse AI to just spam arguments, such rules won't stop them.
Meanwhile, such rules deprive users of the benefits of AI usage, such as improved formulation, brainstorming ideas, etc. All perfectly fair, viable uses.

Instead of a blanket ban, there’s a simple and more differentiated solution: if someone abuses AI, you can just tell them, in that particular situation, to stop using it. We, the staff, can reserve the right to restrict AI usage for specific users in specific threads if it becomes a problem, rather than penalizing everyone.
When dealing with lengthy, AI-generated text that seems designed to systematically stonewall, the good old "TL;DR summarize in your own words pls" should resolve the issue.
 
The person who saw that response and thought "yeah that makes enough sense to post".
I very clearly emphasized that because it's harder to distinguish, it can deceptively be used as a stand-in for participating in a debate. Even then, there are several cases of people following that exact line of logic whilst operating off of the assumption that AI is an infallible source of information, even without cited sources or human-verified evidence to back it up.

I think Ant raises a fair point too; AI is capable of generating several pages of content, which can lead to a flagrant back-and-forth over erroneous info, simply for the sake of doing so or stonewalling in a debate. This is primarily what should be prevented, if not warned against.
How often does that actually happen on here? If people are posting comments entirely without substance, then staff will be able to see through it pretty quickly.
Here are some notable cases worth pointing out:
Most of these cases do not have any staff present either. In practice, "staff seeing to it" doesn't exactly fill me with much confidence, no offense.
I think that the program can very much be used as a substitute for arguing independently, so I do not want any dishonest usage of this resource to systematically overwhelm people who argue for another side.
You've communicated this far better than I could've, methinks.
If people want to systematically overwhelm someone, then they can just as well take an AI text and slightly reformulate it for the same amount of output. After that, whether it's AI or not can't be cleanly determined anymore. If someone actually intends to abuse AI to just spam arguments, such rules won't stop them.
Meanwhile, such rules deprive users of the benefits of AI usage, such as improved formulation, brainstorming ideas, etc. All perfectly fair, viable uses.


Instead of a blanket ban, there’s a simple and more differentiated solution: if someone abuses AI, you can just tell them, in that particular situation, to stop using it. We can reserve the right to restrict AI usage for specific users in specific threads if it becomes a problem, rather than penalizing everyone.
I think the bare minimum would be to require users to disclose that ChatGPT was used where it has been used. I see some people crediting ChatGPT in calc threads, which can be correct in a mathematical/scientific sense more often than when scaling a niche work or popular fiction; this distinction is important, I feel.
 
I think the bare minimum would be to require users to disclose that ChatGPT was used where it has been used.
Why? If the argument is coherent and meaningful enough that you can't tell whether it's AI-generated, it already demands engagement like any other user's post. If that's the case, you couldn't really enforce the rule anyway, as the only posts against which you could enforce it are those where the remark would be unnecessary.

Honestly, I personally find it rather strange to credit AI in calc threads, as it has no relevance in them either. Calcs need to stand independent of that source regardless.

Overall, it just seems to unnecessarily discourage people from using it, given the often negative climate regarding AI use. I know I wouldn't want to add "this post has been partially done with AI" every time I ask for some suggestions on how to improve a formulation of some idea I already had.
 
Why? If the argument is coherent and meaningful enough that you can't tell whether it's AI-generated, it already demands engagement like any other user's post.

Honestly, I personally find it rather strange to credit AI in calc threads, as it has no relevance in them either. Calcs need to stand independent of that source regardless.
See my response to Damage in my previous post. Calcs using AI-generated math/science/scaling very much exist on this wiki, as does the citation of such. AI is also not always correct when used in this way; therefore, it should be cited.

Overall, it just seems to unnecessarily discourage people from using it, given the often negative climate regarding AI use. I know I wouldn't want to add "this post has been partially done with AI" every time I ask for some suggestions on how to improve a formulation of some idea I already had.
So then, having a user cite that they used ChatGPT in their posts is discouraging? It's pretty standard protocol to cite your sources when debating, so I don't think this is an issue. People cited Wikipedia all the time back in the day, despite everyone having used it. You would be accused of plagiarising otherwise.
 
@DontTalkDT

How about if we at least demand that our members openly admit when they have used artificial intelligence programs for further logical analysis, then?

And our staff members should obviously also feel free to use it in order to better understand what they are evaluating, in conjunction with asking other members for summarising explanations. 🙏
 