On Building Communities in Public: Why I Chose Discourse Over Discord

When I first started thinking about building a community around my writing, I did what any reasonable person would do: I looked at what everyone else was doing. Discord servers everywhere. Slack workspaces for the more “Serious Minded.” The occasional subreddit. Everyone seemed to have figured out the formula, and the formula was: pick the platform with the most features, the best mobile app, and the lowest friction for new members.
I almost went with Discord. I had the server half-configured. I'd mapped out channels for different topics, set up roles, even drafted welcome messages. Then I stopped and asked myself a question that should have been obvious from the start: who actually controls this space I'm building?
The answer made me uncomfortable. So I chose Discourse instead, and I want to tell you why. But first, I need to tell you about a coding bootcamp, a game of Risk with Mark Zuckerberg, and how Reddit moderation destroyed a $23.5 million company.
The Curious Case of Codesmith and the Moderator
In 2019, Michael Novati co-founded Formation, a developer bootcamp and training company. Novati had an impressive pedigree: early Facebook engineer, principal-level, the kind of person who played strategy board games with Zuckerberg and somehow turned backstabbing his boss in Risk into a friendship. (He tells this story himself, apparently with pride.) Formation raised $4 million from Andreessen Horowitz. By all accounts, Novati understood how to play competitive games.
At some point, Novati became a moderator of r/codingbootcamp, the main Reddit community for the coding bootcamp industry. The other moderators were essentially inactive. Which meant Novati, a co-founder of a bootcamp company, now controlled the primary online forum where prospective students researched bootcamps.
If you've spent any time thinking about incentive structures, you can probably see where this is going.
One of Novati's main competitors was Codesmith, a bootcamp founded by Will Sentance, a guy whose background was in pedagogy and whose parents were teachers. By all accounts, Codesmith was one of the good bootcamps. They had strong student outcomes, a rigorous application process, and genuine testimonials from people whose lives had been changed. Their peak revenue hit $23.5 million.
Then the attacks started.
Over 487 days, Novati made 425 negative posts about Codesmith on Reddit. That's 0.87 negative posts per day. Nearly one attack every single day for over a year. The posts compared Codesmith to NXIVM (the sex cult from "The Vow"). They accused employees of nepotism. They turned innocuous resume advice into grand conspiracy theories about fraud. When a student posted a glowing review saying "Codesmith changed my life," Novati responded that such testimonials reminded him of what he heard from cult members, planting doubt while maintaining plausible deniability.
The result: when you Google "Codesmith" today, the second result is a Reddit thread titled "Codesmith is an enormous waste of money." When you ask ChatGPT whether Codesmith is a good bootcamp, it regurgitates the same Reddit threads and conspiracy theories. Codesmith's revenue dropped 80 percent. Will Sentance stepped down as CEO, but the attacks continued. Employees left. Some developed mental health issues. Students who had great outcomes became afraid to post about their experiences.
All of this happened on Reddit, a platform that positions itself as an authentic community forum, governed by volunteer moderators acting in good faith. And technically, Novati wasn't violating any explicit rules. He was just a very active moderator who happened to be systematically destroying his competitor's reputation through the platform he controlled.
The Temptation of Opaque Moderation
What bothers me most about the Codesmith story: it wasn't really about Reddit. Reddit was just the first platform where this playbook became obvious. The structure that made it possible exists everywhere.
Discord servers have the same problem. Someone owns the server. That person appoints moderators. Those moderators can delete messages, ban users, and shape the narrative with almost zero accountability. If you build your community on someone else's Discord server, you're subject to their whims. If you build your community on your own Discord server, you have that same power over everyone else, and they have no way to verify what you're doing.
Slack is marginally better if you're paying for it, but the moderation model is identical. Someone has admin rights. That person can see everything, delete anything, and there's no public record of what happened. The transparency is zero.
Even subreddits, which at least have the advantage of being public, suffer from the same fundamental issue: moderation power is concentrated and opaque. You don't know what posts got deleted unless you're using specialized tools. You don't know why someone got banned unless the moderators feel like explaining. And if the top moderator decides to start steering the community in a particular direction, there's essentially nothing anyone can do about it.
This matters more than it used to. We've built an information ecosystem where these platforms are treated as authoritative sources. Google ranks Reddit threads at the top of search results. LLMs are trained on Discord logs and Slack archives. The line between "online community" and "source of truth" has dissolved. When a moderator shapes a narrative on these platforms, they're not just affecting a small group of users anymore. They're poisoning the well that everyone drinks from.
What the Republic Can Teach Us About Forums
The Romans had a concept called the cursus honorum, the sequence of public offices that ambitious men would climb. You couldn't just jump to the top. You had to hold lesser magistracies first, building a track record, demonstrating competence, earning the trust of your peers. The system wasn't perfect (obviously, given how the Republic ended), but it embedded a crucial insight: power should be visible, accountable, and earned through demonstrated trustworthiness.
Modern platform moderation has none of these characteristics. Someone creates a Discord server and immediately has absolute power over it. Someone becomes a Reddit moderator through random chance or by asking nicely, and suddenly they control a community of thousands. There's no track record required, no accountability mechanism, no way to remove bad actors unless they violate the platform's terms of service in especially egregious ways.
What we've built, in other words, is a system of tiny dictatorships masquerading as communities.
I thought about this a lot when I was deciding where to host my community. Every platform I looked at had the same basic structure: centralized control, opaque moderation, no accountability mechanisms beyond the goodwill of whoever happened to be in charge. And I kept thinking: if I build something here, I'm asking people to trust that I'll always be a benevolent dictator. But I don't want to be a dictator at all, benevolent or otherwise.
Why Discourse Is Different (And Why That Matters)
Discourse isn't perfect. No platform is. But it has some structural features that make it harder to pull a Codesmith-style attack.
First, and most importantly, everything is public by default. When a moderator takes action, there's a record. When a post gets deleted, it leaves a visible trace. When someone gets banned, the community can see it happened. This doesn't prevent bad moderation, but it makes it visible, which creates some level of accountability.
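To make "public by default" concrete: a Discourse forum serves most of its content as JSON that anyone can read without logging in, which is what lets outsiders audit what's actually there. The sketch below pulls a topic's post stream from a public instance and labels anything that isn't a regular post, such as moderator actions. The endpoint shape and the post_type values reflect my understanding of the public API rather than official documentation, so treat them as assumptions and verify against your own instance.

```python
# Minimal sketch: read a topic's post stream from a public Discourse instance.
# Assumes the standard public JSON endpoints (append .json to a topic URL) and
# that posts carry a post_type field (1 = regular, 2 = moderator action,
# 3 = small action, 4 = whisper) -- verify against your own instance.
import requests

BASE_URL = "https://meta.discourse.org"  # any public Discourse instance
TOPIC_ID = 12345                         # hypothetical topic id

def fetch_topic_posts(base_url: str, topic_id: int) -> list[dict]:
    """Fetch the first page of posts for a topic via the public JSON API."""
    resp = requests.get(f"{base_url}/t/{topic_id}.json", timeout=10)
    resp.raise_for_status()
    return resp.json()["post_stream"]["posts"]

def summarize(posts: list[dict]) -> None:
    """Print who posted what, labeling anything that isn't a regular post."""
    labels = {1: "regular", 2: "moderator action", 3: "small action", 4: "whisper"}
    for post in posts:
        kind = labels.get(post.get("post_type"), "unknown")
        print(f"#{post['post_number']:>3} {post['username']:<20} {kind}")

if __name__ == "__main__":
    summarize(fetch_topic_posts(BASE_URL, TOPIC_ID))
```

Nobody needs an account, an API key, or the owner's permission to run this against a public forum. That's the whole point.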
Second, Discourse has a trust level system built in. Users earn privileges over time based on their participation. This means moderation isn't just handed to whoever asks for it. Regular community members can flag posts, and if enough trusted users flag something, it gets automatically hidden pending moderator review. Power is distributed rather than concentrated.
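The flag mechanism is worth sketching, because it's the part that actually distributes power. What follows is not Discourse's code, just a minimal illustration of the idea under invented numbers: flags are weighted by the flagger's trust level, and once the combined weight crosses a threshold the post is hidden until a human reviews it. The weights, threshold, and names are all made up for the example.

```python
# Illustrative sketch of trust-weighted flagging (not Discourse's actual code).
# The weights and threshold are invented; the point is that hiding a post
# requires several trusted members to agree, not one moderator acting alone.
from dataclasses import dataclass, field

# Hypothetical weight per trust level: new users count less than regulars.
FLAG_WEIGHT = {0: 0.0, 1: 0.5, 2: 1.0, 3: 1.5, 4: 2.0}
HIDE_THRESHOLD = 3.0  # combined weight needed to hide a post pending review

@dataclass
class Post:
    author: str
    body: str
    flag_weight: float = 0.0
    flagged_by: set[str] = field(default_factory=set)
    hidden_pending_review: bool = False

def flag(post: Post, flagger: str, trust_level: int) -> None:
    """Record a flag; hide the post once enough trusted users agree."""
    if flagger in post.flagged_by:
        return  # one flag per user
    post.flagged_by.add(flagger)
    post.flag_weight += FLAG_WEIGHT.get(trust_level, 0.0)
    if post.flag_weight >= HIDE_THRESHOLD:
        post.hidden_pending_review = True  # a human moderator takes it from here

if __name__ == "__main__":
    p = Post(author="sockpuppet", body="totally organic testimonial")
    for user, tl in [("ana", 2), ("ben", 2), ("cleo", 1)]:
        flag(p, user, tl)
    print(p.flag_weight, p.hidden_pending_review)  # 2.5 False
    flag(p, "dmitri", 3)
    print(p.flag_weight, p.hidden_pending_review)  # 4.0 True
```

No single flag does anything on its own, and no single person's judgment hides a post. That's the structural difference from handing someone a moderator badge and hoping for the best.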
Third, the entire platform is open source. The software's behavior isn't a black box: anyone can inspect exactly how the forum handles posts, flags, and rankings, rather than trusting my description of it. And if things get bad enough, someone could stand up their own instance and rebuild the community elsewhere. The nuclear option exists, which means I have real skin in the game to maintain trust.
But here's the thing I keep coming back to: these technical features matter less than the culture they enable. Discourse is designed for communities that want to operate in public, with visible norms and accountable leadership. Discord and Slack are designed for private groups where the social dynamics can be whatever the group decides. Neither is inherently better, but they optimize for different things.
I wanted to build something in public. Where moderation decisions are visible. Where power is distributed. Where if I ever turn into Michael Novati, systematically attacking my perceived enemies and warping the community around my personal vendettas, everyone can see it happening and has the tools to push back.
The Digital Commons Problem
There's a broader issue here about how we think about online spaces. We've imported the language of physical communities, talking about "public squares" and "gathering places" and "town halls." But the actual structure of these platforms bears no resemblance to a town hall. They're more like shopping malls that happen to have benches where people gather. The space is privately owned, the rules are arbitrary, and management can kick you out at any time for any reason.
This creates a tragedy of the commons in reverse. Instead of a shared resource being depleted through overuse, we have shared social resources being corrupted through concentrated control. The community creates value through participation, but the platform and its moderators can extract that value, distort it, or destroy it.
Reddit's model is particularly insidious here. The platform positions itself as democratically governed by volunteer moderators who are just members of the community. But those moderators have sweeping powers with minimal oversight, and Reddit's administrators rarely intervene unless things become PR disasters. What you get is a system that feels participatory but can be captured by anyone willing to invest the time to become a moderator and then abuse that position.
The Codesmith situation is remarkable not because it's unique, but because it became visible. How many other subreddits are being quietly steered by people with undisclosed conflicts of interest? How many Discord servers are being used to orchestrate reputation attacks? How many Slack workspaces have become echo chambers where dissent gets quietly deleted? We have no way of knowing, which is precisely the problem.
On Not Becoming the Dragon
When I was setting up my Discourse forum, I gave myself a lot of moderator powers. I can delete posts, ban users, edit other people's writing. All the usual stuff. Every time I look at those controls, I'm acutely aware that I could use them to shape narratives, silence critics, and create a version of reality that serves my interests.
The only thing stopping me is my own sense of integrity and the knowledge that everything I do is visible. That's a thin protection, honestly. Integrity is nice, but incentives are more reliable. If I screw up, people can see it, call it out, and leave. The reputational cost is high. The community has some power to push back.
This is a much weaker guarantee than I'd like. In an ideal world, online communities would have robust governance mechanisms, democratic accountability, and ways to remove bad actors without nuking the whole space. We're not there yet. What we have instead is a spectrum of platforms, some more transparent than others, and we have to pick the least bad option.
For me, that option was Discourse. Not because it solves all these problems, but because it makes them visible. When I delete a post, people can see that it was deleted. When I ban someone, the ban is logged. When I'm making decisions about how the community should run, those discussions happen in public threads where everyone can participate.
Could I still abuse this system? Of course. But it would be a lot harder to do what Michael Novati did, systematically destroying someone's reputation while maintaining plausible deniability, purely by virtue of controlling a platform. The structure of Discourse makes that kind of sustained, hidden manipulation much more difficult.
What Comes Next
I don't think this essay will convince anyone to abandon Discord or Slack if those platforms are working for them. They're good tools for certain purposes. Private coordination, real-time chat, ephemeral conversations that don't need to be preserved. All fine use cases.
But if you're building a community that's meant to be a resource, a knowledge base, a place where people come to learn and discuss ideas that matter, you should think carefully about the governance structure. Who has power? How is that power checked? What happens when someone abuses it? Can the community see what's happening?
These aren't abstract questions. They have real consequences. Will Sentance built something valuable, something that was genuinely helping people change their lives, and watched it get systematically destroyed by someone with a conflict of interest and a moderator badge. His employees developed mental health issues. Students became afraid to share their success stories. The damage rippled outward in ways that are still hard to fully quantify.
And here's the uncomfortable truth: Novati's tactics worked. He executed a nearly perfect reputation attack. He stayed just inside the bounds of plausible deniability, never quite violating Reddit's rules in ways that would get administrators involved. He maintained a veneer of concern for bootcamp students while systematically demolishing a competitor. It was brutal, effective, and entirely enabled by the structure of the platform.
Every company is vulnerable to this now. Every community is one bad moderator away from corruption. Every online reputation is subject to the whims of whoever controls the platforms where people search for information. This should terrify us more than it does.
I chose Discourse for my community because I wanted to build something that couldn't be quietly corrupted. Where power is visible, distributed, and accountable. Where if I ever turn into the villain of my own story, people can see it happening and have tools to respond. It's not a perfect solution. But it's better than trusting my own benevolence or hoping that platform incentives will magically align with community wellbeing.
The question isn't whether online communities need governance. They do, always have, always will. The question is whether that governance happens in public, with accountability, or in private, subject to the arbitrary whims of whoever managed to accumulate power. I know which one I prefer. I know which one makes me less likely to wake up one day and discover that someone has spent 487 days systematically destroying something I built.
So: Discourse. Public moderation. Visible decisions. Distributed trust. It's not much, but it's what we have. And in a world where Reddit moderators can destroy businesses and ChatGPT regurgitates conspiracy theories as facts, I'll take every small structural advantage I can get.
Maybe that makes me paranoid. Or maybe I just played enough Risk to know what happens when you let someone else control the board.