Hello World,

We know that votes are one of the most important indicators of community interest in posts and comments. Beyond that, reports help regulate posts and comments that go beyond down-votes and into rule-breaking territory.

Unfortunately, Lemmy’s current reporting system has some shortcomings. Here are a few key technical points that everyone should be aware of:

  • Reports are NOT always visible to remote moderators. (See Full reports federation · Issue #4744 · LemmyNet/lemmy · GitHub)
  • Resolving reports on a remote instance does not federate if the content isn’t removed.
  • Removal only resolves reports in some cases. Removing an individual post or comment resolves its reports on all instances, which is why some moderators remove content individually before banning a user, even when the ban would also remove their content. Bans with the "remove content" checkbox selected will not currently resolve reports automatically.

Our moderators are also volunteers with other demands on their time: work gets busy, people take vacations, and life outside of Lemmy goes on. But that doesn’t mean content that causes distress to users should stick around any longer than reasonable. Harmful content impacts Lemmy.world users AND users of all other instances. We always strive to be good neighbors.

While the devs work on the technical issues, we can work to improve the human side of things. To help keep the reports queue manageable and help reports be resolved in a REASONABLE amount of time, we will be enabling global notifications for all stale reports. Open reports older than 1 day will trigger a notification every 2 days, for up to 1 week, so as not to overwhelm community moderators endlessly. If a community moderator does not wish to receive these notifications, they may simply block the bot, but we still expect them to address unresolved reports.
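For clarity, here is a rough sketch of that cadence. This is purely illustrative (the names, types, and structure are hypothetical, not the bot's actual code); it only encodes the thresholds described above.

```typescript
// Hypothetical sketch of the stale-report notification cadence described above.
// Thresholds mirror the announcement: reports become "stale" after 1 day,
// reminders go out at most every 2 days, and stop after 1 week.

const DAY_MS = 24 * 60 * 60 * 1000;
const STALE_AFTER_MS = 1 * DAY_MS;   // report must be open for more than 1 day
const REMIND_EVERY_MS = 2 * DAY_MS;  // at most one notification every 2 days
const STOP_AFTER_MS = 7 * DAY_MS;    // stop reminding 1 week after going stale

interface OpenReport {
  createdAt: Date;        // when the report was filed
  lastNotifiedAt?: Date;  // last time moderators were pinged, if ever
}

function shouldNotify(report: OpenReport, now: Date = new Date()): boolean {
  const age = now.getTime() - report.createdAt.getTime();
  if (age < STALE_AFTER_MS) return false;                  // not stale yet
  if (age > STALE_AFTER_MS + STOP_AFTER_MS) return false;  // reminder window over
  if (!report.lastNotifiedAt) return true;                 // first reminder
  return now.getTime() - report.lastNotifiedAt.getTime() >= REMIND_EVERY_MS;
}
```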

We expect that this feature will eventually be removed once Lemmy has a better reporting system in place.

As an aside, we recommend that all LW communities have at least two moderators, both for redundancy in resolving reports and to reduce the workload on individual moderators. We know this can be harder for smaller communities, but it is just a recommendation.

We recommend that moderators have alt accounts on Lemmy.world to resolve reports filed from federated instances.

As always feedback is welcome. This is an active project which we really hope will benefit the community. Please feel free to reply to this thread with comments, improvements, and concerns.

Thanks!

FHF / LemmyWorld Admin team 💖

Additional mods

We recommend using the Photon front-end for adding mods to your community.

With the standard lemmy-ui, additional mods can only be appointed if the account has made a post or comment in the community.

Photon (https://p.lemmy.world) does not have this limitation. In the community sidebar, click the gear (settings) icon, open the Team tab, and add any user as a mod.
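If you prefer scripting it, the same action is also exposed through Lemmy's HTTP API (the "add mod to community" call). The sketch below uses plain fetch; the endpoint path, payload fields, and Bearer-token auth are assumptions based on the v0.19 API, so verify them against your instance's API docs before relying on this.

```typescript
// Hypothetical sketch: appoint a moderator via Lemmy's HTTP API instead of the UI.
// Endpoint path, payload fields, and auth header are assumptions based on the
// v0.19 API (AddModToCommunity); check them against your instance before use.

const INSTANCE = "https://lemmy.world";

async function addMod(jwt: string, communityId: number, personId: number): Promise<void> {
  const res = await fetch(`${INSTANCE}/api/v3/community/mod`, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${jwt}`, // obtain a jwt first by logging in
    },
    body: JSON.stringify({
      community_id: communityId, // numeric id of the community
      person_id: personId,       // numeric id of the user to appoint
      added: true,               // false would remove them from the mod team
    }),
  });
  if (!res.ok) {
    throw new Error(`Adding moderator failed: ${res.status} ${await res.text()}`);
  }
}
```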

  • j4k3@lemmy.world · edited · 24 days ago

    Whelp, I considered 8h to be long for addressing issues, but I have no life… Classic mod style 😎

    All I ask is, if you see something, flag it. That doesn’t mean I will always take action, but it is always appreciated just the same.

    A good mod is an invisible mod to almost all users. In fact, I really wish we could institute this distinction as part of Lemmy code and culture (though it might be untenable with people of different personalities). I wish I did not have a mod tag by default; if I chose to act as a mod, I would have the option to add the tag.

    In a perfect world, if a mod interacts with users directly in a conversation, they would be blocked from escalating privileges to the mod tag and taking actions against those users. They would be required to keep an extra active mod for the community, with a unique history and IP address, who could address any flags they or others create. Such a system would solve a lot of the bad-mod complaints that are universal to link-aggregation platforms. The requirement would force objective, judicial thinking that is more resistant to bias and emotion. It would also push active communities to expand their pool of franchised users by adding extra mods, likely increasing post quality and frequency from people who feel a little more motivated to lead by posting actively.

    The last aspect I can think of off the cuff is that it would likely require an extra layer of admin notification and awareness for the rare cases where multiple mods have engaged within a single conversation and it then becomes an actionable issue requiring admin intervention. Really, applying such a system to everyone except the root admin owner would be a way to use checks and balances to ensure healthy, balanced oversight on all instances. It is essentially rejecting the persistent root superuser login and instituting a sudo-type escalation layer for mods and admins, but on a more social level.