Our Discourse Hosting Configuration

Michael Brown January 27, 2015

We’ve talked about the Discourse server hardware before. But now let’s talk about the Discourse network. In the physical and software sense, not the social sense, mmmk? How exactly do we host Discourse on our servers?

Here at Discourse, we prefer to host on our own super fast, hand built, colocated physical hardware. The cloud is great for certain things but like Stack Exchange, we made the decision to run on our own hardware. That turned out to be a good decision as Ruby didn’t virtualize well in our testing.

The Big Picture

(Yes, that’s made with Dia – still the quickest thing around for network diagramming, even if the provided template images are from the last century.)

Tie Interceptors

Nothing but the finest bit of Imperial technology sits out front of everything, ready to withstand an assault from the Internet at large on our application:

The tieinterceptor servers handle several important functions:

  • ingress/egress: artisanally handcrafted iptables firewall rules control all incoming traffic and ensure that certain traffic can’t leave the network
  • mail gateway: the interceptors are responsible for helping ensure that mail reaches its destination: DKIM signing, hashcash signing (potentially), and header cleanup
  • haproxy: using haproxy allows the interceptors to act as load balancers, dispatching requests to the web tier and massaging the responses
  • keepalived: we use keepalived for its VRRP implementation. We give keepalived rules such as “if haproxy isn’t running, this node shouldn’t have priority” and it performs actions based on those rules – in this case adding or removing a shared IPv4 (and IPv6) address from the Internet-facing network
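A sketch of how those keepalived rules might look – the interface name, address, and priorities here are illustrative, not our actual values:

```
# /etc/keepalived/keepalived.conf (illustrative)
vrrp_script chk_haproxy {
    script "killall -0 haproxy"   # succeeds only while haproxy is running
    interval 2
    weight -20                    # failing the check lowers this node's priority
}

vrrp_instance VI_PUBLIC {
    interface eth0
    state BACKUP
    virtual_router_id 51
    priority 100                  # the peer interceptor uses a different priority
    track_script {
        chk_haproxy
    }
    virtual_ipaddress {
        203.0.113.10/24           # the shared Internet-facing address
    }
}
```

If haproxy dies, the weight penalty lets the peer win the VRRP election and claim the shared address, so failover needs no manual intervention.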

Tie Fighters (web)

The tiefighters represent the mass fleet of our Death Star, our Docker servers. They are small, fast, identical – and there are lots of them.

They run the Discourse server application in stateless docker containers, allowing us to easily set up continuous deployment of Discourse with Jenkins.

What’s running in each Docker container?

  • Nginx: Wouldn’t be web without a webserver, right? We chose Nginx because it is one of the fastest lightweight web servers.
  • Unicorn: We use Unicorn to run the Ruby processes that serve Discourse. More Unicorns = more concurrent requests.
  • Anacron: Server scheduling is handled by Anacron, which runs scheduled commands and scripts and catches up on any jobs missed while the container was rebooted or offline.
  • Logrotate and syslogd: Logs, logs, logs. Every container generates a slew of logs via syslogd and we use Logrotate to handle log rotation and maximum log sizes.
  • Sidekiq: For background server tasks at the Ruby code level we use Sidekiq.
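The nginx-to-Unicorn handoff inside a container generally looks something like this – the socket path and headers are illustrative, not our exact config:

```
# nginx proxies to the Unicorn master over a unix socket
upstream discourse {
    server unix:/var/run/unicorn.sock fail_timeout=0;
}

server {
    listen 80;
    location / {
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://discourse;
    }
}
```

Each additional Unicorn worker is another process behind that one socket, which is why more Unicorns means more concurrent requests.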

Shared data files are persisted on a common GlusterFS filesystem shared between the hosts in the web tier using a 3-Distribute 2-Replicate setup. Gluster has performed pretty well, but doesn’t seem to tolerate change well – replacing or rebuilding a node is a gut-wrenching operation that feels like yanking a disk out of a RAID10 array, plugging a new one in, and hoping the replication goes well. I want to look at Ceph as a distributed filesystem store – it can provide both an S3-like interface and a multi-mount POSIX filesystem.
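For reference, a 3-Distribute 2-Replicate volume of that shape is built from six bricks – the host and path names here are illustrative:

```
# three distribute subvolumes, each a mirrored pair of bricks
gluster volume create shared replica 2 \
    web1:/bricks/shared web2:/bricks/shared \
    web3:/bricks/shared web4:/bricks/shared \
    web5:/bricks/shared web6:/bricks/shared
gluster volume start shared

# every web host mounts the same volume
mount -t glusterfs web1:/shared /shared
```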

Tie Fighters (database)

We are using three of the Ties with newer SSDs as our Postgres database servers:

  1. One hosts the databases for our business-class containers (a single Discourse application image hosting many sites) and standard-tier plans.
  2. One hosts the databases for the enterprise class instances.
  3. One is the standby for both of these – it takes the streaming replication logs from the primary DBMSes and is ready to be promoted in the event of a serious failure.
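With 2015-era Postgres (9.x), that standby arrangement amounts to a handful of settings. Note that the standby host would run one replica instance per primary, since a single Postgres instance can only stream from one primary; the hostnames below are illustrative:

```
# on each primary: postgresql.conf
wal_level = hot_standby
max_wal_senders = 4

# on the corresponding standby instance: recovery.conf
standby_mode = 'on'
primary_conninfo = 'host=tiedb1 port=5432 user=replicator'

# promotion in the event of a serious failure
# pg_ctl promote -D /var/lib/postgresql/9.4/main
```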

Tie Fighter Prime

The monster of the group is tiefighter1. Unlike all the other Ties, it is provisioned with 8 processors and 128 GB memory. We’ve been trying to give it more and more to do over the past year. I’d say that’s been a success:

Although it is something of a utility and VM server, one of the most important jobs it handles is our redis in-memory network cache.

Properly separating redis instances from each other has been on our radar for a while – they were already configured to use separate databases for partitioning, but that still meant instances could affect each other. This was notably the case for multisite configurations, which connected to the same redis server but used different redis databases.

We had an inspiration: use the password functionality provided by the redis server to automatically drop all connections using the same password into their own isolated redis backend. A new password automatically creates a new instance tied specifically to that password. Separation, security, ease of use. A few days later, Sam came back with redismux. It’s been chugging along since being moved inside docker in September.
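The dispatch rule at the heart of that idea fits in a few lines of Ruby. This is only an illustration of the concept – redismux itself is a separate proxy process, and the class and method names below are hypothetical:

```ruby
require 'digest'

# Map each distinct password to its own isolated backend: the first
# connection with a new password implicitly creates a new instance,
# and every later connection with that password lands on the same one.
class PasswordMux
  def initialize
    @backends = {} # password digest => backend id
  end

  # Look up (or create) the backend for this password.
  def backend_for(password)
    key = Digest::SHA256.hexdigest(password)
    @backends[key] ||= @backends.size
  end
end
```

Two containers configured with the same password share a backend; changing a password is all it takes to get a fresh, isolated instance.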


Jenkins

Jenkins is responsible for “all” of our internal management routines – why would you do tasks over and over when you can automate them? The elephant in the room, of course, is our build and deployment process. We have a series of jobs set up that run automatically on a GitHub update:

Total duration from a push to the main GitHub repository to the code running in production: 12 minutes, 8 of which are spent building the new master docker image.
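The shape of that job chain is roughly the following – the job boundaries, image name, and commands are illustrative, not our actual Jenkins configuration:

```
# on a push to master: rebuild the base image (the 8-minute step)
docker build -t discourse/master .

# publish it to the internal registry for the web tier
docker push discourse/master

# then, host by host: pull the new image and cycle the container
docker pull discourse/master
docker stop app && docker rm app
docker run -d --name app discourse/master
```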

It’s taken us about a year (and many, many betas of Docker and Discourse) to reach this as a reasonably stable configuration for hosting Discourse. We’re sure it will change over time, and we will continue to scale it out as we grow.


Discourse 1.1 and Release Schedule

Jeff Atwood November 10, 2014

In September, the entire Discourse Team had our yearly world meetup in Toronto. It was a blast!

Robin and Neil (not pictured, he has a thing with heights) hosted us and made sure a fun time was had by all in their home city. We rode segways, we climbed the tallest tower, we were trapped in puzzle rooms, we even killed a hobo. We also discussed the Discourse roadmap post V1, and how we decide what goes into future releases of Discourse.

The next stage of that plan is now complete. As of late last week, we shipped Discourse 1.1!

The complete release notes have a detailed summary of the hundreds of fixes, UI improvements, feature tweaks, and new features in Discourse 1.1, but here are a few highlights:

Improved Search

Search now provides a lot more feedback, including dates, category, and bolded matches in context.

There’s also finally a help link on search which describes all the custom operators and orders you can use, as well as providing general search tips.

Custom User Fields

You can specify custom boolean or text fields for user profiles, including fields that need to be captured at sign up time.

New User Cards

User cards now feature customizable backgrounds and selectable badge images, if you hold a badge that has an eligible image associated with it. The user profile page also got some design updates.

1.1 is a polish release and reflects a stabler, faster, more secure Discourse. It’s what 1.0 should have been, but open source software is never “done”. Upgrade your instance today via our easy one click admin panel updater!

We’d like to thank the entire Discourse community for all their contributions toward this release, whether it was in pull requests, feedback on meta.discourse, or feedback on your own Discourse instance. Heck, we even listen to our customers, sometimes!

For insight into what’s coming up in future releases of Discourse, keep an eye on the releases category at meta.discourse.


Introducing Discourse 1.0

Jeff Atwood August 26, 2014

Today we are incrementing the version number of Discourse to 1.0.

We’ve been working on Discourse in public for about a year and a half now – since February 2013. Turns out that’s about how long it takes to herd an open source project from “hey, cool toy” to something that works for most online discussion communities.

It’s a bit like building an airplane in flight.

Version numbers are arbitrary, yes, but V1 does signify something in public. We believe Discourse is now ready for wide public use.

That’s not to say Discourse can’t be improved – I have a mile-long list of things we still want to do. But products that are in perpetual beta are a cop-out. Eventually you have to ditch the crutch of the word “beta” and be brave enough to say, yes, we’re ready.

So that’s what we’re doing.

In working with the community, in working with our 3 initial partners, in working with our early customers, we’ve gained a lot of confidence that we’ve refined Discourse into something that is safe, complete, has all the rough edges smoothed, and is finally ready for use by everyone:

We’re also, at long last, unveiling our hosting service and install service:

If you’re looking for a world class host to get started with Discourse, why not choose the people that know Discourse best?

As an open source project, we wouldn’t be where we are today without our community, so many thanks are in order:

  • Thanks most of all to the people who believed in Discourse enough to operate and maintain an active Discourse instance. You’re closest to the metal and we always, always highly prioritize your feedback.
  • Thanks to our early customers who saw value in Discourse and were willing to take a leap of faith with us and help build a beta product. Money is the ultimate form of support, and it’s essential to the survival of the project. It’s also amazing how many things we learned when really digging into setups with our early customers.
  • Thanks to everyone who participated on meta.discourse and provided feedback, reported bugs, or discussed features with us. Discourse is better because you spent the time with us to help improve it for everyone. We appreciate that.
  • Thanks to our many contributors and collaborators who submitted pull requests to the Discourse project on GitHub. Any open source project is only as good as its contributors, and one of our continuing goals is to make it easier and easier to contribute to Discourse as we go.
  • Thanks to everyone who used Discourse. Ultimately Discourse is a platform for having fun while communicating with your fellow human beings – building a simple, satisfying user experience has always been our number one priority. There’s no party when nobody shows up!

We’ve come a long way, and we’ve worked hard to get here, but we still have a very long way to go. Here’s to the next 8½ years of our 10 year plan to raise the level of discourse on the web. Join us. We’d love to have you.
