15

The company embraces the AI hype and that's great. Together with the ethicalest and socially responsiblest AI providers¹ the company is ensuring the internet is in the right hands. Naysayers will claim this is all about converting the immense community effort that has made this network possible into dividends and profits for the leadership and owners of the company, and destroying the product itself in the process.

While there have been attempts and gestures to keep the bad AI people at bay, with the appearance of Answer Bot everyone can now see what the real strategy is here:

  1. poison the network with known LLM-generated content so that scrapers training on the content going forward are guaranteed to train on garbage (just in case community moderation manages to filter out third-party genAI content)
  2. tip off the good AI people to know what posts not to use for training going forward.

I applaud the company's resolve to save our future. The only minor issue² is the genAI content policy on certain sites. The company will have to do something about that. So when is the genAI policy going to be suspended? (I was hoping it would come right after the next Community Sprint result announcement, as is usual with unpopular news these days, but that feels like too long a wait right now.)

¹ Ethical and socially responsible not because they pay a lot of money to the company, but because they say so themselves.
² I'm assuming here that the moderators and curators will keep doing what they are doing even if the genAI policy is abolished, because let's face it, they should've stopped long ago and yet here they are.

5
  • 14
    This doesn’t seem constructive.
    – Jeremy
    Commented Dec 6, 2024 at 2:39
  • 14
    @Jeremy It is a very current question. To be blunt, there's a significant risk (read: active threat by the company, by indirect admission) that this sooner or later makes its way to Stack Overflow, where there is a hard ban against this use of AI. If they do push it to SO, they also have to remove the genAI ban (again). Dunno about you, but I'd rather they say that's what they're planning to do in advance rather than waking up to them having removed it At Some Point in a few days to a few months. Commented Dec 6, 2024 at 4:01
  • 5
"poison the network with known LLM-generated content" The users are quite capable of doing that themselves. There was this high-rep user on SO who single-handedly posted over 1000 such answers last year. Despite moderator efforts, I'd say the site is so badly contaminated at this point that it can't be used for training GenAI.
    – Lundin
    Commented Dec 6, 2024 at 15:12
  • 6
    I am genuinely confused if this question is being sarcastic or not. The AI pushers are so blindly and wildly enthusiastic about their terrible technology that it's hard to mock them without seeming like one of them.
    – miken32
    Commented Dec 6, 2024 at 19:50
  • 10
    @miken32 This is 100% sarcasm. Commented Dec 7, 2024 at 14:45

2 Answers

23

There are no current or future plans, moderators know of, that would include suspension of Gen AI policies on sites that don't allow posting AI generated content.

Keeping AI policies and allowing moderators to handle AI-generated content was negotiated as part of ending the moderator strike (see Moderation strike: Conclusion and the way forward), and the current policy regulating AI content is presented in (Interim) Policy on AI-content detection reports.

Even though the title there says the policy is interim, no changes that would weaken moderators' ability to moderate user-posted AI content would be agreed upon.

If the company decides to unilaterally revoke the AI policy, Stack Exchange would face immediate (mass) moderator resignations on the most important sites in the network. The AI policy is non-negotiable.

10
  • 3
So what would happen if, in theory, the company added the Answer Bot user on sites that don't allow GenAI content and had it answer questions? Would that violate the agreement? (if the agreement is just along the lines of "we don't let people post GenAI content".) Commented Dec 6, 2024 at 14:38
  • 3
    Thank you for the mod perspective. Do you believe that top management thinks a mass mod resignation would be bad for the profit margin? I can't tell, but they are so detached from the community that I wouldn't bet. Commented Dec 6, 2024 at 14:39
  • 7
    @ShadowWizard Moderators will and are always fiercely fighting against any AI Franken-features that can hurt the sites. We all firmly stand on "no user AI posting". I cannot say more about other things that are not publicly announced. Main point of my answer here is that people don't start thinking that posting AI is allowed or will be allowed. I want to nip such thoughts in the bud. Commented Dec 6, 2024 at 14:44
  • 9
    @AndrasDeak--СлаваУкраїні I cannot say what top management thinks. Most of the time I would say they don't think at all. Commented Dec 6, 2024 at 14:46
  • 11
    Any feature like the leaked one that generates AI answers (even if they are reviewed in some way) would make the blanket AI bans we have on many sites untenable. The rules would be essentially that AI answers by the system are okay, AI answers by users are not, which is very hard to justify. Commented Dec 6, 2024 at 15:02
  • 5
    @MadScientist Yes, it would be rather hard to justify AI ban while simultaneously having some AI bot crapping around. But nuking AI bot and its posts takes seconds, nuking AI posts made by users is hard work. Commented Dec 6, 2024 at 15:11
  • 5
"There are no current or future plans, moderators know of, that would include suspension of Gen AI policies on sites that don't allow posting AI generated content." I wouldn't have phrased it that black-and-white. There are so many plans, scenarios and ideas floating around that I really wouldn't be comfortable stating it like that.
    – Mast
    Commented Dec 6, 2024 at 15:20
  • 10
    Obligatory note that we wouldn't be told if they were planning to do this anyway. Last time, it was abruptly unbanned without mods being warned. Mods not being informed about plans to do so isn't indicative of anything given the current state of the company Commented Dec 6, 2024 at 17:57
  • 3
    @Zoe-Savethedatadump That is why I wrote "moderators know of". And that is why I put the last paragraph there. Commented Dec 6, 2024 at 18:20
  • 6
    @MadScientist A single, sanctioned pseudo-user posting AI answers for review is something completely different than dozens, hundreds or more regular users posting AI answers pretending they are regular answers. Mind, I am not weighing if either is good, but they are certainly rather different. Notably, only one ticks the marks of SO's initial ban motivation. Commented Dec 7, 2024 at 5:26
-15

Non-company perspective: The genAI policy was introduced to prevent an influx of unreliable information and to ensure that sources are attributed properly, as far as I remember the discussions from December 2022. Therefore, as soon as genAI can actually reliably provide that information and give proper attribution, there is no obvious reason to uphold that ban anymore. If you ask me, it will probably take 6-8 more years to reach that state, but the future is a bit difficult to predict.

From the comments there seems to be a notion that AI technology will either remain at this level or suddenly jump from being unreliable at Q&A to being totally reliable, which would then eliminate the need and will of humans to create collaborative or competing Q&A.

I think there will be a prolonged transition period between the two states, where automated systems will be much more reliable than now (at least on the level of a good answerer) but not yet perfect, and there will be a demand for combined human and AI knowledge generation in the form of Q&As. However, if human interest in participating in Q&A declines faster than that (which remains to be seen), it doesn't really matter exactly when the company suspends the AI policy, as long as it doesn't do so before AI becomes more reliable and can give attribution.

As an example of what might already work today: have a Q&A site where new questions are first answered by a bot, but only if the bot is confident it can answer; if not, or if the asker is not satisfied with that first attempt, the question is forwarded to the human answerers' section, which remains strictly human-organized and retains full functionality, search and all. The confidence threshold would be tuned so that the maximum number of questions is answered correctly. This would, in my eyes, combine the better parts of all the worlds currently existing.
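The bot-first, human-fallback routing described above could be sketched roughly as follows. This is a minimal illustration, not an existing system: `bot_answer`, the `0.8` threshold, and all the names are hypothetical assumptions.

```python
# Hypothetical sketch of the bot-first, human-fallback routing idea.
# All names and the confidence threshold are illustrative assumptions.
from dataclasses import dataclass

CONFIDENCE_THRESHOLD = 0.8  # would be tuned to maximize correctly answered questions


@dataclass
class Question:
    title: str
    body: str
    asker_satisfied: bool = False  # did the asker accept the bot's first attempt?


def bot_answer(question: Question):
    """Stand-in for a model call; returns (answer_text, confidence in [0, 1]).

    A real system would query an LLM here; this stub simply declines.
    """
    return None, 0.0


def route(question: Question, human_queue: list) -> str:
    """Try the bot first; forward to human answerers if it declines,
    is unsure, or the asker rejects its attempt."""
    answer, confidence = bot_answer(question)
    if answer is not None and confidence >= CONFIDENCE_THRESHOLD:
        if question.asker_satisfied:
            return "answered-by-bot"
        # The asker rejected the bot's first attempt; fall through to humans.
    human_queue.append(question)  # the strictly human-organized section
    return "forwarded-to-humans"
```

With the stub declining every question, everything lands in the human queue; swapping in a real confidence estimator is where the balancing act the answer describes would happen.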

18
  • 11
    It's not just the unreliable part that's the problem. The problem is that it's elaborately phrased unreliable information. Almost as if chatbots only knew how to talk the talk. Commented Dec 6, 2024 at 8:01
  • 8
"as soon as genAI can actually reliably provide that information and give proper attribution" - there's no need for a Q&A site then, outside perhaps as a data source. There's no good outcome for the network from genAI, except perhaps a short-term source of revenue IMO Commented Dec 6, 2024 at 8:01
  • @JourneymanGeek "there's no need for a Q&A site then" Collaborations between humans and AI maybe? You can still ask the questions or edit the AI answers or put up your own competing answers and there is no strict need to give up completely just because... Commented Dec 6, 2024 at 8:04
  • 11
    Why would someone ask a question here instead of cutting out the middleman and asking the perfectly reliable AI directly? The AI doesn't care if your question is on-topic, or how subjective it is, or whether it meets all the quality guidelines. It will chat with you privately and give you one answer instead of a group of people having discussions about your question and putting up competing answers. If this network is going to be a bunch of AI bots answering questions, it has no point. SE's mission now is to convince people to curate data for AI models w/o compensation.
    – ColleenV
    Commented Dec 6, 2024 at 12:21
@ColleenV I didn't say "perfectly" reliable. That's something we don't even require from human answerers, so it wouldn't make sense to set the bar higher for bots. As I said to JourneymanGeek above: collaboration between humans and AI is a possible mode for this platform in the future. And this can result in even higher quality and people asking here instead. But if we are all doomed anyway, it doesn't really matter. Commented Dec 6, 2024 at 14:46
  • 10
    All this "AI will have solved all its problems in X years" stuff is starting to remind me of string theory, which has been a decade away for like 30 years. If you have to keep saying that the solution to all our problem is at X point in the future, it's probably not something we need to think about for a very long time. Commented Dec 6, 2024 at 15:12
  • 9
    Also, genAI in its current form will always hallucinate due to fun mathematical byproducts of how current LLMs work. LLMs built on current tech just will not be reliable all or most of the time Commented Dec 6, 2024 at 15:15
  • 4
    My point was, why would someone ask here instead of asking the AI directly if it can provide good answers? If I'm going to reinforcement train an AI, or curate data for it, I'm not going to do it for free for a trillion dollar corporation like Alphabet; I'm going to train my own personal AI (which my husband is setting up for us as I'm typing this lol). AI is a tool. I don't collaborate with tools, I use them.
    – ColleenV
    Commented Dec 6, 2024 at 15:24
@Zoe-Savethedatadump Yes, current technology is unreliable. I certainly assume that in the answer. But it may not remain like that. And humans also aren't 100% reliable, so the bar should not be set too high. Something one could do already now is train bots not to answer if they aren't sure. That would already increase accuracy. Commented Dec 9, 2024 at 7:58
  • 1
@ColleenV I think I understand you. And I agree that it's absolutely your right not to collaborate or compete with AI if you don't want to. But are you sure that everyone thinks like you? Or maybe some people will actually get paid and work there? And then, why keep contributing to Q&A even today? All your public contributions will be used anyway. And of course you can have your personally trained AI, but isn't the whole idea here to share ideas and learn from each other? I'm not sure a personal AI can achieve the same. Commented Dec 9, 2024 at 8:03
  • 2
    As an example of what could already work today: have a Q&A site where new questions are first answered by bot if the bot feels that it can confidently answer it and if not or if the asker is not satisfied, the question is forwarded to the human answerers section. The balancing would be so that a maximal number of questions is answered correctly. This would in my eyes combine the best of all worlds currently existing. Commented Dec 9, 2024 at 8:19
  • 7
    @Zoe-Savethedatadump "always hallucinate". Interesting paper, but I would keep it even simpler. The current probability based LLM text generation can ONLY produce probabilistic "hallucinations". There is no "understanding", no "intelligence". What makes the tech kinda work is that the hallucinations often hallucinate right, but that is just because usually it is more probable that the training material weights point at "1492" as the next word in "America was discovered in" than... "Tofu" Commented Dec 9, 2024 at 9:52
  • 2
I don't contribute here any longer. I just drop by once in a while to see how things are going. The last post I made on a non-meta site was in February. There's no need to have a whole separate site for AI answers; what is already planned is for people to ask the paywalled Google AI and, if they don't get a good answer, forward the question to the human chumps on SE who will volunteer to provide the data that will effectively kill their community by building the AI bypass that diverts most traffic away from it.
    – ColleenV
    Commented Dec 9, 2024 at 20:50
  • 1
Google's AI isn't paywalled. Neither is Microsoft's AI, or OpenAI's AI, or any of the others. There's already functionally unlimited access to free AI models, and they all consume disgusting amounts of power to do their thing. SE is trying to compete in a hype-driven, oversaturated market where the costs are massive and the profits are non-existent. SE jumping head-first into AI isn't just bad for the community - it's probably the single biggest threat to the future of the network. On account of the company willingly doing this, SE, Inc. is the single biggest threat to the community - not AI Commented Dec 9, 2024 at 21:07
  • 3
    @Zoe-Savethedatadump It isn’t paywalled yet. The roadmap is to have a paid-for AI assistant directly in the IDE powered by Stack Exchange curated data that no-one else has easy access to for AI training (conveniently). Once it has significant enough market penetration, they’ll start charging for it, just like OpenAI is switching to a for-profit company. There are very few companies that do stuff to benefit humanity that don’t change their tune as soon as the amount of money involved gets large enough.
    – ColleenV
    Commented Dec 9, 2024 at 22:56
