

Is the rendering handled by the client's device?



I have been researching quite a bit about SpatialOS as I am thinking of taking a leap from my current networking model. I like everything I see about SpatialOS; I'm only unclear about how the device handles all the draw calls and so on. For instance, if there are 20,000 people in-game and 5,000 of them are in front of you battling, with a lot of physics simulation happening, then even though the simulation is handled by SpatialOS, the graphics still have to be rendered on the device, right? What if the device can't render 5,000 players' worth of particles and SFX? I think I'm definitely missing something here.
I also read a forum post where someone said that with SpatialOS you can play games with massive worlds and graphics-intensive content on a mobile phone. Does this mean we can bring the visual fidelity of a desktop computer to a mobile phone?


Welcome to the forums :slight_smile:

To clarify: SpatialOS does not stream video - as you said, the rendering is still done on the client.

This does mean that when designing your massive, persistent world, you need to take into account the hardware of your target market and the graphical capabilities of that hardware. As you said: if you have 20,000 people in a world, how on earth would you draw all of them!

SpatialOS allows you to configure what is currently 'checked out' by each client. This means that although the world itself is massive, each client only gets updates for a much smaller portion of the game world, and as the player moves around, SpatialOS handles what they should, and should no longer, know about.
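The checkout idea can be pictured as a distance filter applied server-side, so the client never even hears about far-away entities. This is just a toy Python sketch of the concept, not the SpatialOS API; the names `Entity`, `checked_out`, and `checkout_radius` are all made up for illustration:

```python
import math
from dataclasses import dataclass

@dataclass
class Entity:
    entity_id: int
    x: float
    z: float

def checked_out(player: Entity, world: list[Entity],
                checkout_radius: float) -> list[Entity]:
    """Return only the entities inside the player's checkout radius.

    SpatialOS does this filtering in the cloud, so a client in a
    20,000-player world only receives updates for its local area.
    """
    return [
        e for e in world
        if e.entity_id != player.entity_id
        and math.hypot(e.x - player.x, e.z - player.z) <= checkout_radius
    ]

# A client standing at the origin in a large world:
player = Entity(0, 0.0, 0.0)
world = [player, Entity(1, 40.0, 30.0), Entity(2, 900.0, 0.0)]
local = checked_out(player, world, checkout_radius=150.0)
print([e.entity_id for e in local])  # only the nearby entity, id 1
```

The key point is that the client's render workload scales with the checkout area, not with the total world population.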

As an example, in Worlds Adrift a player stands on an island: they do not have to render all of the other 200+ floating islands that are miles away, because SpatialOS ensures you only get told about local entities and changes.

With regards to graphically intensive stuff on mobile: I think what people were getting at is that because you don't need to run all of the systems, like the AI or physics, on the client device, you can spend more of your CPU budget on other things. Some examples off the top of my head are better occlusion algorithms or more advanced local shadow generation techniques. I don't think a mobile device that is running the physics and the AI would normally have time to properly extrude volumes on the fly for real-time shadows, but perhaps a SpatialOS client that is simply rendering the entities it is being told about will have a chance :slight_smile:

Hope this helps! Please ask for any further clarification
Cal :support:


Thank you so much for the clarification; I think I get the idea now. So the device only has to worry about rendering. Every other calculation is handled by the workers on the server, and all the data associated with an entity is sent to the client so it can be drawn in the proper place.
Some more questions popped up after your answer.

  • What exactly does SpatialOS allow us to configure to make the client 'check out' only a particular region of the world? Is it something like camera bounds?
  • OK, another scenario: I'm near, say, a 'Base'. It has a lot of detailed interiors. Conventionally I would keep it in another scene and additively load that scene when the player enters. How would this scenario play out in SpatialOS? Is it fine if we keep it loaded all the time? (That means it gets rendered by the device even though it's not seen.) Or does it get loaded when a nearby player walks into it, or only when you walk into it?
  • Also, right now it's not possible to deploy for Android or iOS on SpatialOS, is it?



Not quite a frustum; it's simply a 'radius' set in game chunks. In the hello world client configuration, it's currently set to 3, which adds up to 150 meters.
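The arithmetic behind that number: the world is divided into square chunks of a fixed edge length, and the checkout radius is counted in chunks. A radius of 3 chunks coming to 150 meters implies a 50 m chunk edge, which is the assumption in this sketch (the variable names here are illustrative, not the actual config keys):

```python
# Checkout radius is configured in chunks; converting to meters just
# multiplies by the chunk edge length (assumed 50 m here, since
# 3 chunks = 150 m in the hello world configuration).
chunk_edge_length_m = 50
checkout_radius_chunks = 3

checkout_radius_m = checkout_radius_chunks * chunk_edge_length_m
print(checkout_radius_m)  # 150
```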

You wouldn't 'instance' the base as another level: the base would just exist and always have NPCs inside moving around on their own, because the workers running the AI logic are persistent in the cloud. As the player approaches the base, you would start getting component updates for the entities inside: you don't have to render them, but you will be told about their positions / current states.
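One way to picture the client side of this: the client keeps the state of every checked-out entity up to date, and the renderer separately decides which of them to actually draw. This is an illustrative Python sketch of that separation, not SpatialOS SDK code; `ClientView`, `on_position_update`, and `entities_to_draw` are made-up names:

```python
class ClientView:
    """Tracks checked-out entity state; rendering is a separate decision."""

    def __init__(self, render_distance: float):
        self.positions: dict[int, tuple[float, float]] = {}
        self.render_distance = render_distance

    def on_position_update(self, entity_id: int, x: float, z: float) -> None:
        # Always applied: simulation state stays current even for
        # entities the renderer chooses not to draw (e.g. NPCs indoors).
        self.positions[entity_id] = (x, z)

    def entities_to_draw(self, cam_x: float, cam_z: float) -> list[int]:
        # Only nearby entities are handed to the renderer.
        return [
            eid for eid, (x, z) in self.positions.items()
            if (x - cam_x) ** 2 + (z - cam_z) ** 2
               <= self.render_distance ** 2
        ]

view = ClientView(render_distance=100.0)
view.on_position_update(7, 20.0, 10.0)   # NPC just inside the base
view.on_position_update(8, 400.0, 0.0)   # NPC across the map
print(view.entities_to_draw(0.0, 0.0))   # [7]
```

So "knowing about" an entity (receiving its component updates) and "rendering" it are independent: the base's NPCs can be simulated and streamed to you long before you decide they are worth drawing.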

We have an experimental iOS integration that is still very early days, but we'd love feedback on it!