Other accounts:

@Danterious@lemm.ee

All of my comments are licensed under the following license

https://creativecommons.org/licenses/by-nc-sa/4.0/deed.en

  • 21 Posts
  • 59 Comments
Joined 1 year ago
Cake day: August 15th, 2023

  • The point is to pick out the users that only like to pick fights or start trouble, and don’t have a lot that they do other than that, which is a significant number. You can see some of them in these comments.

    OK, then that makes sense as to why you chose these specific mechanics for how it works. Does that mean hostile but popular comments in the wrong communities would get a pass, though?

    For example, let’s assume that most people on Lemmy love cars (probably not the case, but let’s go with it) and there are a few commenters who consistently show up in the !fuck_cars@lemmy.ml or !fuckcars@lemmy.world community to argue why everyone in that community is wrong. Or vice versa.

    Since most people scroll All, it could be the case that those comments get elevated while comments from the people the community is actually for get downvoted.

    I mean, it’s not that big of a deal now because most values are shared across Lemmy, but I can already see that starting to shift a bit.

    I was reminded of this meme a bit.

    Initially, I was looking at the bot as its own entity with its own opinions, but I realized that it’s not doing anything more than detecting the will of the community with as good a fidelity as I can achieve.

    Yeah, that’s the main benefit I see coming from this bot. Especially if its output is just given in the form of suggestions, humans are still making most of the judgement calls, and the way it makes decisions is transparent (like the appeal community you suggested).

    I still think that instead of the bot considering all of Lemmy as one community, it would be better if moderators could provide focus for it, because there are differences in values between instances and communities that I think should be reflected in the moderation decisions that are taken.

    However, if you aren’t planning on developing that side of it more, I think you could still let the other moderators who want to test the bot see notifications from it any time it has a suggestion for a community user ban (edit: for clarification) as a test run. Good luck.

    Anti Commercial-AI license (CC BY-NC-SA 4.0)


  • But in general, one reason I really like the idea is that it’s getting away from one individual making decisions about what is and isn’t toxic and outsourcing it more to the community at large and how they feel about it, which feels more fair.

    Yeah, that does sound useful; it’s just that there are some communities where it isn’t necessarily clear who is a jerk and who has a controversial minority opinion. For example, how do you think the bot would’ve handled the vegan community debacle that happened? There were a lot of trusted users who were not necessarily on the side of the vegans, and it could’ve made those communities revert to a norm of what outside users think to be good and bad.

    I think giving people some insight into how it works, and ability to play with the settings, so to speak, so they feel confident that it’s on their side instead of being a black box, is a really good idea. I tried some things along those lines, but I didn’t get very far along.

    If you want, I can help with that. Like you said, it sounds like a good way of decentralizing moderation so that we have fewer problems with power-tripping moderators and more transparent decisions. I just want communities to be able to keep their specific values while easing their moderation burden.

    Anti Commercial-AI license (CC BY-NC-SA 4.0)