Quibster
@Agnaa has given me permission to create this Staff Thread.
The Problem with AI
This might go without saying at this juncture: the use of AI (e.g., ChatGPT, Copilot, Gemini, Grok) has been gradually rising on this wiki. I've seen several members, and even a few staff members, post lines of reasoning that amount to "According to ChatGPT" and "ChatGPT says" as counterarguments, evidence, or sources in debates. Seeing this is frustrating, as it often derails discussions and debates.
While I do not believe the use of AI is a problem by itself (e.g., when used in Fun & Games, Memes, et cetera), I believe it's fair to say that a line should be drawn in the sand. I'll go into a few reasons why AI should be restricted on this wiki, along with what state-of-the-art AIs are capable of, both for those who may be uninformed and as key talking points.
AI is Harder to Detect Than Before
The Yale Daily News published an article in November 2024 discussing how AI has evolved since its integration into the school's curriculum. In the linked article, Anya Geist reports on a project organized by four researchers, three of them from Yale's School of Medicine, who studied how difficult AI-generated essays have become to distinguish from human writing:
Geist writes: "In a project organized by four researchers, including three from the School of Medicine, researchers tasked readers with blindly reviewing 34 essays, 22 of which were human-written and 12 which were generated by artificial intelligence. Typically, they rated the composition and structure of the AI-generated essays higher."
More pertinently, the article states that "the readers only accurately distinguished between AI and human essays 50 percent of the time," and that "According to Yale College, inserting AI-generated text into an assignment without proper attribution violates academic integrity."
I believe Dr. Lee Schwamm's quote in the article best illustrates my point here: “How would we even know, other than the word of the author, whether the paper was assisted by generative AI?”
So let’s think critically about this: what happens to a member who admits to using AI, or worse, is caught using it without telling anyone? What if it has gone on for weeks, months, or even years? Should we impose warnings, maybe even bans? And what about those who do cite AI as a writing tool? What precedent do we set there? With AI now present in higher places of learning, identifying it is akin to flipping a coin; it's just that much harder to detect now. This brings me to the idea behind Hanlon's Razor: most of this won't be malicious. If AI becomes a standard in education, it stands to reason that its use will be normalized more than ever. "If an AI wrote it, then why would it be wrong?" is a fairly common viewpoint that I consistently see both IRL and online. Therefore, AI should be regulated in some way before its misuse becomes widespread.
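To put that coin-flip comparison in perspective: on a binary human-vs-AI call, a reader guessing at random is also expected to be right about 50 percent of the time, so the study's readers performed no better than chance. A quick simulation makes this concrete (Python; the essay counts come from the study, everything else is purely illustrative):

```python
import random

# Illustrative only: a "reader" who flips a coin on each essay.
# Essay counts are taken from the Yale study (22 human, 12 AI).
ESSAYS = ["human"] * 22 + ["ai"] * 12

def coin_flip_accuracy(essays, trials=10_000):
    """Average accuracy of pure random guessing over many trials."""
    correct = 0
    for _ in range(trials):
        for truth in essays:
            if random.choice(["human", "ai"]) == truth:
                correct += 1
    return correct / (trials * len(essays))

print(f"Pure guessing scores about {coin_flip_accuracy(ESSAYS):.1%}")
# ~50.0% -- the same rate the study's readers achieved.
```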
Having said all of this, I want to emphasize that I firmly believe there needs to be a precedent. Anyone on this wiki should be able to demonstrate their own understanding, rather than relying on AI to do the heavy lifting. It's not hard to imagine that there are those who believe wholly generated answers from an AI are as credible as a human's, or even more accurate. This brings me to my next talking point:
AI Also Learns: Predictive Responses that Appeal to Bias
AI-generated responses are typically predictive: they provide answers curated for you, using the data they have been trained on, which means that quality and accuracy can vary significantly. Anyone can also train an AI to provide answers in whatever way makes sense to them.
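To make "predictive" concrete: ChatGPT and its peers are vastly more sophisticated, but even a toy text generator demonstrates the underlying principle, since it can only echo the statistics of whatever it was trained on. Here's a minimal sketch (Python; the one-sided "corpus" is made up purely for illustration):

```python
import random
from collections import defaultdict

# A deliberately one-sided training "corpus" (made up for illustration).
corpus = (
    "goku wins because goku is stronger . "
    "goku wins because goku is faster . "
    "goku wins because goku never loses ."
).split()

# "Train" a toy bigram model: record which word follows which.
follows = defaultdict(list)
for prev, word in zip(corpus, corpus[1:]):
    follows[prev].append(word)

def generate(start, length=8):
    """Generate text by repeatedly sampling a plausible next word."""
    out = [start]
    for _ in range(length):
        choices = follows.get(out[-1])
        if not choices:
            break
        out.append(random.choice(choices))
    return " ".join(out)

print(generate("goku"))
# e.g. "goku wins because goku is faster . goku wins"
# The model can only ever conclude that goku wins; the bias baked
# into its training data is the only "knowledge" it has.
```

No amount of confident phrasing from the model changes that; the output is only ever as trustworthy as the data behind it.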
For instance, if an AI model has been used to answer questions that lean toward specific viewpoints on a given subject, it may generate content that misleads or even reinforces those biases. In practice, this has been demonstrated by people who cite AI as credible without ever questioning the origin of its answers.
Oftentimes an AI doesn't cite its sources, or is simply unable to; therefore, a user can credit ChatGPT as the source without ever knowing whether what they've generated is true. This is clearly a problem that should also be addressed.
Conclusion
It's evident that AI can undermine one's critical thinking and media literacy, and compromise one's ability to participate in a discussion/debate. It also has the potential to be disruptive; you can search "ChatGPT" in the site's search bar to see what kind of toxicity it leads to. Therefore, I propose the following:
- Prohibit AI-generated posts (including but not limited to: Versus Threads, Content Revision Threads, Question and Answer Threads, Calc Group Threads, and Staff Threads) and citations of AI (such as "ChatGPT" or similar AI programs).
- "AI-Generated Response" should be listed as a fallacy on the Fallacies page, to more concretely dissuade the use of AI in debates.
Agree: Antvasima, DarkDragonMedeus, Flashlight237
Disagree: Vzearr, Damage3245, DontTalkDT, Agnaa, TWILIGHT-OP, Nierre
Neutral: