Five Minute Chat offers advanced moderation tools, including Discord integration and AI assistance. Features include notifications for user conduct reports, managing and actioning those reports within Discord, automatic assessment and reporting of user misconduct, powerful language translation, and AI support agents.
The Discord Moderator Bot integrates directly with Five Minute Chat to provide seamless moderation capabilities and is operated exclusively through Discord slash commands. Through the bot, moderators can handle reported messages, look up chat context, and perform moderation actions such as direct messaging, silencing, kicking, or banning users.
To get started with the Discord Moderator Bot, you first need to invite the Fiveminutes.io Moderator bot to your Discord server; the documentation walks through the steps for getting it up and running.
It is recommended to give the bot its own channel by denying it access to all channels except one or a few specific channels where you intend to interact with it. The bot currently does not support user authentication and will accept any command from any user in any channel it is a member of.
To allow the moderator bot to interact with your Five Minute Chat application, you need to register the application with your Discord server. This is done by submitting the `/admin register-app` command to the bot, supplying the application ID and secret you use in your Unity project.
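For example, a registration might look roughly like the following; the application ID and secret are placeholders, and the exact argument names may differ from what the bot presents in Discord's slash-command UI:

```
/admin register-app <application-id> <application-secret>
```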
The Discord Moderator Bot supports various slash commands for administrative tasks (e.g., `/admin register-app`, `/admin list-apps`), application-related tasks (e.g., `/apps list-channels`, `/apps recent-history`), interacting with reported messages (e.g., `/details`, `/context`, `/close`), support ticket related commands (e.g., `/respond`), and interactions with users (e.g., `/ban-globally`, `/silence`, `/message`). Help commands like `/commands` are also available.
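If you are unsure which commands and arguments your bot version supports, or which applications are currently registered, you can query the bot directly from a channel it has access to, for example:

```
/commands
/admin list-apps
```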
When a user reports a message in your game, the report is posted to Discord along with a unique ID. Moderators can then use commands like `/details` to retrieve the message details, `/context` to view the chat history around it, `/open` or `/close` to manage its status, and `/assign` to assign it to themselves.
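A typical triage flow might look like the following sketch, using `1234` as a placeholder report ID; the text after `#` is annotation only, not part of the commands, and the exact argument format may differ:

```
/details 1234   # inspect the reported message
/context 1234   # view the chat history around it
/assign 1234    # take ownership of the report
/close 1234     # close the report once it has been handled
```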
Moderators can take various actions, including direct messaging users, silencing users globally or in specific channels (temporarily or permanently), kicking users from channels, and banning users globally or from specific channels (temporarily or permanently). They can also unban or unsilence users, either immediately or on a scheduled basis.
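As an illustration, a few of these actions could be invoked as follows; the user ID, duration, and message text are placeholders, and the argument names and duration format are assumptions rather than the bot's documented syntax (as before, `#` marks annotation only):

```
/message <user-id> Please keep the conversation civil.   # direct message a warning
/silence <user-id> 24h                                   # temporary global silence (placeholder duration)
/ban-globally <user-id>                                  # global ban
```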
Moderators can either censor or delete offending messages. Censoring replaces the message content with a placeholder indicating it was removed by a moderator, while deleting removes the message entirely from the chat history. Censored messages have their original content viewable by moderators but not by regular users.
Yes, Five Minute Chat's moderation tools integrate powerful language translation features. Through on-demand and automatic translations, moderators can understand and act on conversations in most languages, even ones they don't speak.
Yes, Five Minute Chat provides AI Assistance to automatically assess and report user misconduct. It also offers AI support agents to augment support request management, shortening time-to-feedback for users. For your own application, any and all AI processing is disabled by default and requires opt-in by the app developer.
Yes, content filtering is available either as a static, keyword-based text filter or as AI-based automated moderation flagging. Both are configured through the web dashboard, as client-side API configuration for filtering has been deprecated: keyword-based filtering can be toggled there, and auto-analysis (AI moderation reports) can be enabled. AI processing for your own application is disabled by default and requires opt-in.
Yes, AI-based moderation is available. It scans messages for potential violations and can generate moderation reports automatically, which human moderators can be alerted to and act on.
No, AI processing is opt-in and disabled by default for your own application. You can choose to enable it via the web dashboard if you wish to have messages scanned for potential violations.