This topic was raised a couple of days ago by @Jay2k1 in this thread.
I believe it is important that we discuss and agree together on how to treat AI-generated content – the solution won't be universally perfect, but it should work for our community.
With a bit of research and thinking about it, I came up with the following list of assumptions:
There are several ways in which AI could be used and we need to distinguish those:
Fully AI-generated (copy-paste)
Translated with AI (we are international!)
Text edited with AI (used only to make the text more understandable)
Researched with AI (AI used to look up details, with the answer itself written without it)
Here are the considerations that seem important to me:
AI answers could be useful, especially for those who are not sure where else to search.
The author who posts takes responsibility for their post; before posting, they should check the answer to the best of their ability.
AI texts tend to be rather long (even though they are well structured), which is often excessive.
Keeping that in mind, I would suggest we discuss the following addition to the rules of the forum:
1. Fully copy-pasted AI answers should not be allowed in the forum.
2. Five reports on misleading AI posts would lead to a ban of the author, as the person who takes responsibility for the content; authors should check their posts to the best of their ability and not post if something in the response is outside their expertise.
3. Other types of AI use should be explicitly mentioned in the text, be it the original message/question or the answer.
4. Editing texts with AI should not add much to their volume – it should only be used to improve clarity.
Let’s have a discussion.
Please let me know what you think,
Sara
To me, two points are important, or at least stood out to me over the last weeks. These are observations about my own behavior and conclusions about what I value, not an attempt at generalized statements. The points are:
Length of posts
Accuracy (wrong or misleading information)
I usually click on and read topics that I think are either interesting or where I think I might be able to contribute. AI-powered or AI-assisted replies are sometimes sooooo long and detailed that I find myself tuning out somewhere in the middle. As a result, nowadays I find myself completely abandoning topics as soon as I see such a reply, because to me this just isn't worth my time: not to read all of it, not to verify any of it, not to contribute my own thoughts or expertise. I even go so far as to intentionally skip topics with replies from certain people I suspect or know to use AI a lot. Basically: "yeah OK, their problem now".
I find this kind of sad, not for me personally, but for the people asking the questions, as in general I think that having input from a diverse set of people is valuable.
As for accuracy, I think this is pretty self-explanatory. I get really angry at people wasting my time, and if their replies contain something that’s obviously hallucinated or pretty misleading I consider the whole reply to be worse than worthless. Again, this is my own, personal valuation.
On the other hand, I'm fine with people posting AI solutions if they've validated that those solutions actually do what they claim to do. Sadly, I've read more than enough posts I suspected (or was sure) were AI-generated where that validation clearly wasn't done. Again, as a reader it wastes a lot of my time (and I'm not even the original poster!) and brain power, and it's pretty dishonest.
Coming back to your four suggested additions to the rules, Sara:
I concur with 1, 2 & 4 as-is.
I find 2 especially important for me as a known potential consequence for not following the rules. One could debate whether "5 incidents" is too high or too low a number; I wouldn't go higher, but maybe slightly lower, to "3 incidents".
I'd argue that 3 is only really helpful if the use of AI is mentioned at the start of the post. Otherwise you'll end up reading the whole post, only to find "produced with AI" at the bottom, which reframes the whole thing: if it was written by AI, I have to be vigilant and skeptical. (This is something Jay2k1 pointed out in their original question in the other thread you linked to: "Personally, I don't want to get to a point where I have to question the correctness of forum answers in the same way I need to do for AI answers.")
I think the additional rules match what is needed. Unverified AI answers are just a waste of time, so we should keep them out of the forum. Rule 2 is needed to enforce rule 1.
We can do 3 reports.
About the reported posts – once reported, they are normally hidden, so on the one hand there is no need to read through them all.
On the other hand, the problem with hiding reported messages is that with a single report I would have to decide whether the content is actually false, which might be difficult for me.
I guess we could limit the length somehow (the way we currently do with the minimum length limit). I'm not sure it would help that much, though, since one could easily ask the AI to make the text shorter without any actual human effort.