Hi @Swizzlewizzle, the answer to this is a little nuanced, as it really depends on the kind of worker you’re dealing with.
There are two important optimisation concepts to think about:
1. How inter-dependent your data is
2. How parallel your computations are
Point (1) is important because we can reduce unnecessary data duplication (why send the same data over the network to two workers to do similar work?).
Point (2) is important because the locations of the bottlenecks in your computation can influence where you want to parallelise.
Let’s take a few examples of kinds of workers you might have in your game:
1. An inventory worker
This worker manages player inventories: it tracks the items players have picked up and dropped, validates against item-duplication exploits, and perhaps updates an external database so that players can view item prices offline.
In this case, every item the worker deals with is independent of all the other items. Parallelising is simple: you just handle each item individually. Imagining a worker as a ‘thread’ is a perfectly valid assumption here, and you can simply start a lot of them to handle your load.
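As a rough sketch of this “independent items” case (Python here just for illustration; `process_item` is a hypothetical stand-in for your real inventory logic), you can fan items out across a pool of workers with no coordination at all:

```python
from concurrent.futures import ThreadPoolExecutor

def process_item(item_id: int) -> int:
    # Stand-in for real inventory logic (validation, DB update, etc.).
    # Each item is independent, so no locking or coordination is needed.
    return item_id * 2

item_ids = list(range(100))

# Because items share no state, we can spread them across workers freely
# and collect results in whatever order they finish.
with ThreadPoolExecutor(max_workers=8) as pool:
    results = list(pool.map(process_item, item_ids))
```

Scaling this is just a matter of starting more workers: there is no shared state to fight over.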
2. A physics worker
This worker simulates a physical region of space. There are several ways you can exploit multiple threads to make the simulation more efficient. However, you do benefit from keeping a local region of the world (e.g. all objects within a 1km by 1km box) on the same worker, because you want to avoid density problems: you don’t want workers ‘fighting’ over a region of space, since they would all need to synchronise over the network for the entities they share. In this case, having one “heavy”, well-parallelised worker has an advantage over having several “lighter” workers.
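A minimal sketch of the “one heavy worker, parallel inside” idea, assuming (hypothetically) that interactions never cross cell boundaries: the worker owns the whole region, buckets entities into grid cells, and steps each cell on its own thread.

```python
from collections import defaultdict
from concurrent.futures import ThreadPoolExecutor

CELL_SIZE = 1000.0  # metres; hypothetical 1km grid cells

def cell_of(pos):
    # Map a 2D position to the grid cell that owns it.
    return (int(pos[0] // CELL_SIZE), int(pos[1] // CELL_SIZE))

def step_cell(entities):
    # Stand-in for one physics step over entities that only interact
    # locally; here everything just drifts one metre east.
    return [(x + 1.0, y) for (x, y) in entities]

entities = [(10.0, 20.0), (990.0, 20.0), (1500.0, 400.0), (2500.0, 2500.0)]

# Bucket entities by cell. Each bucket can be stepped independently
# because (in this sketch) no interaction crosses a cell boundary,
# so the threads never need to synchronise with each other.
buckets = defaultdict(list)
for e in entities:
    buckets[cell_of(e)].append(e)

with ThreadPoolExecutor() as pool:
    stepped = list(pool.map(step_cell, buckets.values()))
new_entities = [e for cell in stepped for e in cell]
```

The key point is that all the synchronisation happens in memory on one worker, rather than over the network between several workers sharing a boundary.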
3. A pathfinding worker
This worker finds a route through some obstacles from point A to point B. While calculating any given route is perfectly parallelisable, pathfinding benefits from caching. If you want to make your computations less expensive, you can cache routes (or, even better, parts of routes) locally on your worker. This means a single worker can have benefits over several workers, even if each calculation is itself reasonably parallel.
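To sketch why a single worker wins here, assume (hypothetically) a tiny grid map and a breadth-first search: with an in-memory cache, the worker computes each (start, goal) route once and replays it from memory for every later request, which only works if the requests all land on the same worker.

```python
from collections import deque
from functools import lru_cache

# Tiny hypothetical map: 0 = walkable, 1 = obstacle.
GRID = [
    [0, 0, 0, 1],
    [1, 1, 0, 1],
    [0, 0, 0, 0],
]

def neighbours(cell):
    r, c = cell
    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nr, nc = r + dr, c + dc
        if 0 <= nr < len(GRID) and 0 <= nc < len(GRID[0]) and GRID[nr][nc] == 0:
            yield (nr, nc)

# The cache is the whole point: repeated requests for the same pair
# are answered without re-running the search.
@lru_cache(maxsize=1024)
def find_route(start, goal):
    # Plain breadth-first search; returns the route as a tuple of cells.
    frontier = deque([start])
    came_from = {start: None}
    while frontier:
        cur = frontier.popleft()
        if cur == goal:
            route = []
            while cur is not None:
                route.append(cur)
                cur = came_from[cur]
            return tuple(reversed(route))
        for nxt in neighbours(cur):
            if nxt not in came_from:
                came_from[nxt] = cur
                frontier.append(nxt)
    return None  # goal unreachable

route = find_route((0, 0), (2, 3))
```

Caching parts of routes (say, per-region waypoints) follows the same shape, just with a cache keyed on sub-segments instead of whole (start, goal) pairs.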
There are some other considerations around how you set up your workers’ threading models. Some aspects of gameplay, e.g. physics, benefit from a threading model that keeps the simulation smooth (aiming for a constant framerate). This means isolating your handling of network I/O from background tasks, and both of those from the loop that actually produces frames at a constant rate.
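A sketch of that isolation, with made-up message names and frame budgets: a network thread pushes messages onto a queue at unpredictable times, while the simulation loop drains the queue without ever blocking on the network, then sleeps away the rest of each frame’s budget to hold a steady rate.

```python
import queue
import threading
import time

inbound = queue.Queue()  # network thread -> simulation thread

def network_thread():
    # Stand-in for network I/O: messages arrive at unpredictable times,
    # but they never block the simulation loop directly.
    for i in range(5):
        inbound.put(f"update-{i}")
        time.sleep(0.001)

def simulation_loop(frames, dt=0.005):
    applied = []
    for _ in range(frames):
        frame_start = time.monotonic()
        # Drain whatever the network thread has queued, without blocking.
        while True:
            try:
                applied.append(inbound.get_nowait())
            except queue.Empty:
                break
        # ... step the simulation here ...
        # Sleep away the rest of the frame budget to keep a constant rate.
        remaining = dt - (time.monotonic() - frame_start)
        if remaining > 0:
            time.sleep(remaining)
    return applied

t = threading.Thread(target=network_thread)
t.start()
messages = simulation_loop(frames=10)
t.join()
```

Background tasks (database writes, logging, and so on) would get the same treatment: their own thread and their own queue, so a slow task can never stall a frame.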