Matchmaking uses a round-robin sharding approach: each new room is assigned to a backend instance round-robin at creation, and that same instance handles the room for its whole lifetime, letting me keep game state in memory and scale horizontally without Redis.
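Roughly the idea, as a minimal sketch (the instance list and names here are placeholders, not my actual code):

// Sketch of the sharding: rooms get a backend instance round-robin at
// creation, then stick to it, so that instance can keep the room's state
// in memory.
const BACKENDS = ['backend-1:3001', 'backend-2:3002']; // placeholder list

let next = 0;
const roomToBackend = new Map(); // roomId -> backend address (sticky)

function backendForRoom(roomId) {
  if (!roomToBackend.has(roomId)) {
    roomToBackend.set(roomId, BACKENDS[next]);
    next = (next + 1) % BACKENDS.length;
  }
  return roomToBackend.get(roomId);
}

// Every subsequent join/message for "room-42" is routed to the same instance.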
Here’s the issue: at ~500 concurrent players across ~60 rooms (max 8 players/room), I see low CPU usage but high event loop lag. One feature in my game is live typing during a player's turn: each throttled keystroke is broadcast to the other players in the room in real time. If I remove this logic, I can handle 1000+ players without issue.
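The typing feature is essentially this (a minimal sketch with socket.io; the event names and throttle window are illustrative, not my exact values):

import { Server } from 'socket.io';

const io = new Server(3000);

io.on('connection', (socket) => {
  socket.on('join', (roomId) => socket.join(roomId));

  let lastSent = 0;
  socket.on('typing', ({ roomId, text }) => {
    const now = Date.now();
    if (now - lastSent < 100) return; // throttle to ~10 broadcasts/sec
    lastSent = now;
    // One broadcast per throttled keystroke, to everyone else in the room.
    socket.to(roomId).emit('typing', { playerId: socket.id, text });
  });
});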
Scaling out backend instances on my single server doesn't help. I expected spreading the load across more instances (less load per instance) to help, but I still hit the same limit around 500 players. This suggests to me that the bottleneck isn’t CPU or app logic but something deeper in the stack, though I’m not sure what.
Some server metrics at 500 players:
- CPU: 25% per core (according to htop)
- Packets: ~3,000/s in / ~3,000/s out
- Bandwidth: ~100 KB/s in / ~800 KB/s out
Could 500 concurrent players just be a realistic upper bound for my single-server setup, or is something misconfigured? I know scaling out with new servers should fix the issue, but I wanted to check in with the internet first to see if I'm missing anything. I’m new to multiplayer architecture so any insight would be greatly appreciated.
Is there any cross-room communication? Can you spawn a process per room? Scaling that tops out at 25% CPU on a 4 vCPU node strongly suggests a locked section limiting you to effectively single-threaded performance. Multiple processes serving rooms should bypass that if you can't find it otherwise, but maybe there's something wrong in your load balancing etc.
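Something like this is what I mean by splitting rooms across processes (a rough sketch with node:cluster; the room-to-worker routing is a placeholder):

import cluster from 'node:cluster';
import { cpus } from 'node:os';

if (cluster.isPrimary) {
  // One worker per core, each with its own event loop, so one hot room
  // (or one serialized section) can't stall everything else.
  const workers = [];
  for (let i = 0; i < cpus().length; i++) workers.push(cluster.fork());

  // Placeholder routing: pin each room to a worker by hashing its id.
  const workerForRoom = (roomId) => {
    const hash = [...roomId].reduce((h, c) => h + c.charCodeAt(0), 0);
    return workers[hash % workers.length];
  };

  // workerForRoom('room-42').send({ type: 'createRoom', roomId: 'room-42' });
} else {
  process.on('message', (msg) => {
    // Handle only the rooms assigned to this worker here.
  });
}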
Personally, I'd rather run with fewer layers, because then you don't have to debug the layers when you have perf issues. Do matchmaking wherever with whatever layers, and let your room servers run in the host os, no containers. But nobody likes my ideas. :P
Edit to add: your network load is tiny. This is almost certainly something with your software, or how you've set up your layers. Unless those vCPUs are ancient, you should be able to push a whole lot more packets.
Try buffering the outgoing keystrokes to each client. Then, someone typing "hello world" in a server of 50 people will use 50 syscalls instead of 550 syscalls.
Think Nagle's algorithm.
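Rough sketch of the idea (the flush interval and event names are arbitrary):

// Queue keystrokes per client instead of writing each one immediately,
// then flush once per tick: one write per client per interval.
const pending = new Map(); // socket -> queued keystroke messages

function queueKeystroke(socket, msg) {
  if (!pending.has(socket)) pending.set(socket, []);
  pending.get(socket).push(msg);
}

setInterval(() => {
  for (const [socket, msgs] of pending) {
    if (msgs.length === 0) continue;
    socket.emit('typingBatch', msgs); // one write per client per tick
    msgs.length = 0;
  }
}, 50); // arbitrary flush interval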
> This suggests to me that the bottleneck isn’t CPU or app logic, but something deeper in the stack
Just a word of caution - I have seen plenty of people speed towards, e.g., "it must be a bug in the kernel" when 98% of the time it is the app or some config.
import { performance } from 'node:perf_hooks'
// Returns { idle, active, utilization }; utilization near 1.0 means the loop is saturated.
performance.eventLoopUtilization()
See the docs for how it works and how to derive some value from it. We had a similar situation where our application was heavily IO bound (very little CPU), which caused some initial confusion about the slowdown. We ended up adding better metrics around IO and the event loop, which led to us batch-dequeuing our jobs in a more reasonable way and made the entire application much more effective.
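Something along these lines to track it over time (the interval is arbitrary):

import { performance } from 'node:perf_hooks'

// Sample event loop utilization once a second; a value near 1.0 means the
// loop is saturated even when the CPUs look idle.
let last = performance.eventLoopUtilization()
setInterval(() => {
  const delta = performance.eventLoopUtilization(last) // delta since `last`
  last = performance.eventLoopUtilization()
  console.log(`ELU: ${(delta.utilization * 100).toFixed(1)}%`)
}, 1000)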
If you crack the nut on this issue, I'd love to see an update comment detailing what the issue and solution was!
* Check the total number of sockets, as I suspect there could be multiple sockets per user.
* Investigate what socket.io does to serialize messages both on and off the wire. I wrote my own WebSocket library for Node and noticed the cost to process messages on the receiving end is about 11x greater than on the sending end. Normally that doesn’t matter, until you push past a critical point. At that point everything slows to a crawl, because the message volume per interval exceeds what the garbage collection cycle can keep up with and everything backs up. In my case that didn’t happen until 180,000 to 480,000 messages per second, depending on the hardware. The critical difference on the hardware side was memory speed; CPU availability was largely irrelevant.
* Also look at what socket.io does, if anything, to queue messages at each end of the socket. Messages queued both on and off the wire will be a factor if not properly managed, or if that queueing is absent; a rough sketch of checking the send-side queue is below.
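With the raw ws library rather than socket.io (the threshold and interval are arbitrary):

import { WebSocketServer } from 'ws';

const wss = new WebSocketServer({ port: 8080 });

// ws.bufferedAmount is how many bytes are queued locally but not yet put on
// the wire. If it keeps growing for a client, your messages are outpacing
// the connection and you need to batch, shed, or disconnect.
setInterval(() => {
  for (const ws of wss.clients) {
    if (ws.bufferedAmount > 1_000_000) {
      ws.terminate(); // or stop broadcasting to this client for a while
    }
  }
}, 1000);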
Anyway please follow up or blog when you solve it. Sounds interesting.
I noticed, for example, that adding a New Relic agent drops HTTP throughput by almost 10x.
I haven't tried Swarm, but to some degree I assume it can have the same effects as Docker Compose with several services. I'm also less sure of the effects if there's never any communication between containers, but I think the same or a similar issue may still apply.
What I experienced (not exactly a load test, but processing a large dataset through multiple Docker containers started from a docker compose config) was that the default Docker bridge network (docker0) was saturated. After creating a user-defined Docker network that the various containers were configured to use, things got a lot better.
So this is the question for you: do all the containers in the swarm talk via docker0? If yes, read up on Docker networks in relation to Swarm in particular.
I ended up figuring out a fix but it's a little embarrassing... Optimizing certain parts of socket.io helped a little (e.g. installing bufferutil: https://www.npmjs.com/package/bufferutil), but the biggest performance gain I found was actually going from 2 Node.js containers on a single server to just 1! To be exact, I was able to go from ~500 concurrent players on a single server to ~3000+. I feel silly, because had I been load-testing with 1 container from the start, I would've clearly seen the performance loss when scaling up to 2 containers. Instead I went on a wild goose chase trying to fix things that had nothing to do with the real issue[0].
In the end it seems like the bottleneck was indeed happening at the NIC/OS layer rather than the application layer. Apparently the NIC/OS prefers to deal with a single process screaming `n` packets at it rather than `x` processes screaming `n/x` packets. In fact it seems like the bigger `x` is, the worse performance degrades. Perhaps something to do with context switching, but I'm not 100% sure. Unfortunately given my lacking infra/networking knowledge this wasn't intuitive to me at all - it didn't occur to me that scaling down could actually improve performance!
Overall a frustrating but educational experience. Again, thanks to everyone who helped along the way!
TLDR: premature optimization is the root of all evil
[0] Admittedly AI let me down pretty bad here. So far I've found AI to be an incredible learning and scaffolding tool, but most of my LLM experiences have been in domains I feel comfortable in. This time around though, it was pretty sobering to realize that I had been effectively punked by AI multiple times over. The hallucination trap is very real when working in domains outside your comfort zone, and I think I would've been able to debug more effectively had I relied more on hard metrics.