Strategies for keeping children safe in your online community

We are at a point in the evolution of online behaviour where privacy and safety are finally getting the attention they deserve. Around the world, new laws and standards are changing how online spaces operate, especially when it comes to protecting children and other vulnerable users.
Community managers and site operators are feeling real pressure from the seemingly never-ending stream of new legislation and compliance work. For example, the latest requirements of the UK Online Safety Act have led some operators to close down their sites rather than go through the new processes necessary for compliance.
We are often asked whether Discourse is compliant with GDPR, the DSA, the OSA, and so on, and our answer is always the same: compliance isn’t only about the software itself, but about how it is used. We provide a suite of tools that can be configured so that Discourse is used in a way that complies with all legislation we are aware of. How you implement them is ultimately your responsibility, but online safety and security are part of our core values, and we commit to supporting you in any way feasible.
This post is not compliance advice. It is an overview of the tools and settings available to Discourse administrators who want to run their communities in an intentional way to best protect their users from harm. You may choose to utilise some or all of these tools as part of your privacy and safety frameworks – which ones will depend on the type of community and the nature of your audience.
Work out which rules apply
Before you jump into our tools and settings, it’s important to familiarise yourself with the applicable laws and regulations – be aware that they differ between countries and sometimes even states. You will need to evaluate where potential members reside, and consider the possibility of children signing up even if your rules prohibit it. You may need to consider COPPA (the US Children’s Online Privacy Protection Act – applies to children under 13), GDPR (the EU General Data Protection Regulation – applies to children under 16, although member states may lower this threshold to 13), and other potentially relevant requirements for children’s data processing in the UK, such as the Age-Appropriate Design Code. Depending on where your members are based, there may be other important legislation to consider. If you are unsure, consult a lawyer for advice.
Online safety in communities with minors
Discourse’s hosting terms prohibit children under 13 from using hosted sites. At the same time, we recognise that more and more children are connecting online with each other and with wider communities. Online safety matters, and for this reason we want to share ways administrators can proactively protect minors and other vulnerable groups. If minors may be signing up to your community, there are protections you could consider putting in place to keep them safer.
Privacy policy, Terms of Service, advertising and analytics
It is vital to have a robust privacy policy. Some regulations require that you have one that outlines how you collect, use, and protect personal information from children under 13.
Terms of Service which are easily discoverable and clearly documented can be used to define rules about acceptable use. In higher risk communities you may require explicit agreement to community guidelines during registration.
Your TOS may include:
- A minimum age requirement (or other audience restrictions)
- Terms specifically excluding the furnishing or soliciting of data from children
- The enforced use of real names and rich profiles
- The requirement to submit to a background check
- Sanctions for inappropriate behaviour
- Content standards
- Transparent communication about monitoring of PMs and personal chat messages
- A whistleblowing policy to encourage the reporting of suspicious members
You should also consider avoiding external advertising and analytics tools to reduce the risk of sharing personal information about your members with third parties and also simplify compliance.
Involve guardians in registration
A helpful way to encourage safety is involving guardians in your registration process by obtaining verified consent for children to join. You could also consider mandating the provision of contact details for guardians and adding parents or trusted adults as moderators to provide an additional layer of protection.
Controlling who can join
The Invite only setting can be used to lock down your forum to a select group of people, and is one of the best ways to ensure the safety of your members. If that setting is too restrictive, consider using Must approve users instead. If you are relying on this setting to protect children or vulnerable users, you will need strict, documented protocols for properly validating users’ identities before you approve them.
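As a sketch of how these registration controls could be toggled programmatically, the snippet below builds (but does not send) requests against Discourse’s site settings admin endpoint. The setting names `invite_only` and `must_approve_users` are the standard Discourse site settings, but the endpoint path, header names, and your API credentials should be verified against your own instance; the URL and key here are placeholders.

```python
# Sketch only: construct the admin API requests that would toggle
# registration-control site settings. Nothing is sent over the network;
# wire these descriptions up with an HTTP client such as `requests`.

BASE_URL = "https://forum.example.com"  # placeholder site


def site_setting_request(name: str, value: str,
                         api_key: str, api_username: str) -> dict:
    """Describe a PUT to /admin/site_settings/{name} (Discourse admin API)."""
    return {
        "method": "PUT",
        "url": f"{BASE_URL}/admin/site_settings/{name}",
        "headers": {"Api-Key": api_key, "Api-Username": api_username},
        "data": {name: value},
    }


# Lock the forum to invited users only, or fall back to manual approval.
lock_down = site_setting_request("invite_only", "true", "KEY", "system")
approve = site_setting_request("must_approve_users", "true", "KEY", "system")
```

Keeping the request construction separate from the sending makes it easy to review exactly what would change before applying it to a live site.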
Moderation
It is important to have active, round-the-clock moderation. If that is not possible, place the site in read-only mode during periods when no moderators are rostered.
Flagging combined with intentional moderation practices can be leveraged to manage harmful content. Make sure you publicly document your moderation policies and conduct regular training with your moderation team, including protocols for escalating to authorities. Moderation protocols should include that:
- users can flag posts for reasons such as harassment, hate speech or other harmful content
- flagged content will be reviewed by moderators or hidden automatically to reduce harm
- moderators and administrators have access and have been trained on the swift removal of problematic content on discovery, to minimise harm
- administrators have access to personal messages to review flags where necessary
Messaging and Trust permissions
On very high risk sites it may make sense to disable personal messaging and chat functionality. A less disruptive option might be to restrict which groups have messaging permissions.
Trust Levels can be utilised further to protect the safety of children by controlling permissions for younger, vulnerable or less trusted users. For instance, children could be locked to a trust level with stricter limits, like no access to send or receive personal or 1:1 chat messages.
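As an illustration, pinning a user to a restricted trust level could look like the following sketch, which describes (without sending) the two admin API calls involved: setting the trust level, then locking it so activity cannot raise it. The paths follow Discourse’s admin user routes, but confirm them against your instance’s API before relying on this.

```python
# Sketch only: describe the admin API calls that pin a user to a given
# trust level. Endpoint paths are assumptions based on Discourse's admin
# user routes; verify before use. Nothing is sent over the network.

BASE_URL = "https://forum.example.com"  # placeholder site


def lock_trust_level(user_id: int, level: int) -> list[dict]:
    """Set a user's trust level, then lock it so it cannot change."""
    return [
        {"method": "PUT",
         "url": f"{BASE_URL}/admin/users/{user_id}/trust_level",
         "data": {"level": level}},
        {"method": "PUT",
         "url": f"{BASE_URL}/admin/users/{user_id}/trust_level_lock",
         "data": {"locked": "true"}},
    ]


calls = lock_trust_level(42, 1)  # keep hypothetical user 42 at trust level 1
```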
Monitoring and Transparency
Make it widely known that all content, including personal messages, is visible to admins and moderators. Some site owners feel uncomfortable with this approach, viewing it as an invasion of privacy, but remember that you are responsible for the safety of your members – so it should be a serious consideration if you run a high risk site.
Watched Words can be used with filters to automatically block or replace specific words or expressions. Use the ‘require approval’ or ‘flag’ methods to trigger moderation actions against harmful terminology. Prepopulated lists found online can be uploaded. For extra security, use the ‘silence’ method, which will send the post for moderator approval if it is the first post of a new user and contains a trigger word. This enables admins to proactively screen potentially risky new members.
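A prepopulated list could be loaded programmatically rather than through the admin UI. The sketch below builds (but does not send) one request per word against Discourse’s watched-words admin endpoint; the path and the `require_approval` action key are assumptions based on the admin routes and the moderation methods described above, so confirm the exact keys on your version.

```python
# Sketch only: build payloads for the watched-words admin endpoint, one
# word per request. Endpoint path and action keys are assumptions; check
# them against your Discourse version. Nothing is sent over the network.

BASE_URL = "https://forum.example.com"  # placeholder site


def watched_word_request(word: str, action: str) -> dict:
    """Describe a POST that registers `word` under a watched-word action."""
    return {
        "method": "POST",
        "url": f"{BASE_URL}/admin/customize/watched_words.json",
        "data": {"words[]": word, "action_key": action},
    }


# Send posts containing these placeholder terms to the review queue.
requests_to_send = [watched_word_request(w, "require_approval")
                    for w in ["badword1", "badword2"]]
```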
AI triaging can be used in conjunction with Discourse Automation to optimise and automate moderation efforts. Toxicity Detection enables you to classify posts as ‘toxic’ or ‘positive’ based on various criteria (e.g., threats, hate speech, personal attacks) and NSFW Detection is a powerful tool for identifying and handling problematic images effectively.
If you use Discourse Chat, the Block, Censor, Link and Replace Watched Words functionality can be enabled to actively monitor chat conversations for risk vectors.
Educate your members
All members, including children, should be regularly educated on safety protocols and flagging harmful or inappropriate content. It is also important to have prominently displayed and clearly signposted rules, user responsibilities and moderator contact channels. In high risk communities you may want to consider having members re-accept your terms annually to keep their accounts active.
Stay vigilant
While these strategies go a long way toward securing your community, your best line of defence is rigorous and proactive moderation. Manual vetting and validating of registrations, tracking user activity to identify potentially malicious users, and actively (and visibly) monitoring all communication channels are crucial practices which should not be overlooked.
Special data regulations
On CDCK hosted plans we prohibit the furnishing or soliciting of data which is subject to Special Data Regulations, which includes data from children under 13 years old (COPPA). If you plan to use an external hosting service and expect children to access your site, please pay close attention to the terms and conditions under which you are signing up. If you self-host your site you will need to be aware of any data protection and transfer laws which apply.
Final thoughts
Building a safe online community requires a multifaceted approach. By utilising the available tools within Discourse, staying informed on relevant legislation, and prioritising proactive moderation, site owners can create an environment that protects its members while fostering healthy online behaviour.
The key is intentionality: thoughtfully selecting and implementing safety measures based on the specific needs and risks of your community to foster a positive and secure online space for all.