

About SpatialOS pricing




More up-to-date information is available in this new post. The content of this thread has fallen out of date and should not be used for any type of reference.


We’ve had lots of questions around pricing since we released the alpha, so ahead of its official release, targeted for early February, I wanted to share our plans for SpatialOS pricing.

So how will the pricing of SpatialOS work?

  • You’ll only be charged for what you use in cloud deployments of your game: how many workers the deployment uses, the number of users connected, and how long the workers and users are on for*.
  • Specifically, we’ll calculate the number of worker-hours and the number of user-hours used up during the deployment. This will vary with the scale and complexity of your game, and the number of people playing it.
  • Different types of workers will be priced differently, reflecting their different compute and support requirements. We expect to have pricing for Unity3D workers and C++ workers from the start; and we’ll add pricing for new types of worker as they become available.
  • Pricing of user-hours will reflect the external bandwidth costs and support costs for client workers. It’s likely we’ll have different types of user-hour pricing, based on the requirements of different types of games (for example, bandwidth requirements will depend on their level of fidelity).
  • You won’t be charged for what you need to build games: the SDK, the tools to run and understand a SpatialOS deployment, the support forums. These will all come included with your SpatialOS license.
  • We’re also aiming not to charge for very small cloud deployments running for a limited duration, so everyone can experiment with cloud deployments.

We’re working to release the full details of pricing in early February, including price points and the first set of worker pricing. We’ll have more details to share on that closer to the date.

Let me know what you think, and if you have any questions.

*Workers are the microservices that perform the computation for a SpatialOS game world, and can be game engines or your own custom workers. Users, or ‘clients’, are a particular type of worker that players use to connect to a game.


Any chance of ending the nepotism towards Unity? Favorable pricing, a supported SDK, tutorials and Unity-centered development; it seems they’ve got their share already. Never mind that using the language choice as some sort of cost metric is a poor show. Just because it’s C++ or C# doesn’t imply any inherent cost difference.

What about those of us who already have the bare metal for hosting and only want the SpatialOS control plane?

Specifically, we’ll calculate the number of worker-hours and the number of user-hours used up during the deployment

That makes no sense without context for what constitutes worker-hours and user-hours. It sounds like double-dipping billing, and overly complex. Bill based on usage at the near (server) end, including the cost of bandwidth, not some silly calculation based on the number of users and time connected.

All I was really able to glean from this post was: we will be billed, something.


Agree with the above.
There need to be more specific price points; if my app doesn’t make enough money to cover the cost of the servers, this is dead from the get-go.
Can you offer some sort of example using games that are already out there?
Like comparing what Realm of the Mad God vs. Worlds Adrift vs. RuneScape would cost as the number of users increases or decreases.

I figure this will take quite a bit of work, but the amount of data you have access to for the size of the games’ maps, the user density, and the complexity of interactions should provide insight into the number of workers you’d need to pull off a game of that scale; I’d imagine this will transfer quite nicely into cost.

What I did understand from this, however, is that pricing will scale with the number of users and the scale of the world.
This makes a ton of sense from a pricing point of view: as users increase, sales can increase too, giving a roughly proportional scaling of cost as the game grows. Is it possible, though, that server costs grow multiplicatively/exponentially as the user base grows?

But yeah, could we get some more specific examples, if possible, of what to expect from said costs?


@paultech I am not sure whether you wanted the tone of your message to sound negative; if so: great success!

Although I do agree that there are no concrete numbers in the post above (for which an explanation was provided), I have to add that there is a significant difference between running a Unity-based worker and a C++-based worker. A Unity-based worker means running Unity and the physics simulation that comes with it, and is generally more CPU- and memory-intensive. As such, I find it easy to imagine that this requires larger instances to run, and therefore higher cost.

Calling nepotism on something (p.s. consult a dictionary before using expensive words) can hardly be called constructive. The platform is primarily geared towards Unity, and if anything I applaud their efforts in wanting to be inclusive of UE. I only hope you would join me in this opinion, instead of focussing on SpatialOS’ core platform not meeting your needs yet.


@solsnare although I am in no way affiliated with Improbable, nor have any more insight into the figures, I can share my experiences from developing with SpatialOS. One of the things you may want to be mindful of is scale. You could simulate a huge area, but the limits of 32-bit floats in game engines mean that every 20,000 to 40,000 units of distance you need another physics worker.

So, suppose your game world is 100,000 x 100,000 x 100,000: that would mean you’d need 125 (5 x 5 x 5) servers at worst and 8 (2 x 2 x 2) servers at best. And this is just to make sure that every piece of land is covered by a server for physics.

This means that how you build your game will heavily influence the cost it brings. In addition, your workers will multiply when there is more load: a large number of entities means more servers are needed. Being somewhat sparing with what you promote to an entity (storing some things in a data structure and rendering them client-side only, or better yet using procgen) can save you some costs as well, as can off-loading some work from your physics machines to ‘lighter’ workers such as a C++ or C# worker.
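The back-of-the-envelope worker count above can be sketched in code. This is a rough illustration only: the coverage-per-worker figures are assumptions (the 50,000-unit best case is picked to reproduce the “8 servers” figure), not real SpatialOS limits.

```python
import math

def grid_workers(world_side: float, worker_coverage: float) -> int:
    """Number of physics workers needed to tile a cubic world,
    assuming each worker covers a cube of side `worker_coverage`
    (illustrative figures only, not real SpatialOS limits)."""
    per_axis = math.ceil(world_side / worker_coverage)
    return per_axis ** 3

# A 100,000-unit cube at 20,000 units per worker: 5 per axis -> 125 workers.
print(grid_workers(100_000, 20_000))  # 125
# At 50,000 units per worker: 2 per axis -> 8 workers.
print(grid_workers(100_000, 50_000))  # 8
```

As the cubed term shows, fixed baseline cost grows very fast with world size, which is why keeping empty regions cheap to simulate matters so much.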


Negative or positive, I did not attempt to skew the message in any way; it was feedback to help the team figure out the best pricing model and how the community feels. The post gave very little information and left more questions than it answered.

A Unity-based worker means having to run Unity and the physical simulations that come with it and is generally more CPU and memory-intensive

(We are ignoring the fact that Unity itself is written in C++, or that an Unreal Engine worker would be equal to Unity in resources? But I digress.)

And a C++ worker can’t be resource-intensive why? My point was simply that language-based pricing makes zero sense. They are attempting to bill on resource usage; that’s acceptable and understood, but the difference between C++ and Unity workers as it pertains to resource usage is not a good mapping. Bill based on CPU/memory/IO usage, not on what it was coded in.

Improbable and Unity have a long-standing relationship of mutual benefit; nepotism does a fine job of conveying the point I was trying to push across. It’s plenty constructive: it’s providing feedback that developers outside Unity want to be part of the food chain.

Happy note: I love SpatialOS and see it being a huge player in the field over the coming years but it has some big hurdles to overcome to actually penetrate…


I interpret the differentiation of Unity vs. C++ not as one based on language but as examples of resource consumption without resorting to hard numbers. Based on the company’s use of terminology in documentation and public communications, I feel that C++/C#/Javascript are regarded as ‘light utility workers’ and physical simulation servers as ‘heavy workers’.

In any case, I do not think it is particularly productive to place emphasis on semantics. My point is that I do not feel they are emphasising the programming language as the distinguishing factor (Unity is not a language, after all) but rather using simple examples of different types of utilisation incurring different costs.

And if you feel nepotism is the most appropriate term to express how you feel, don’t let me stop you. I just find it somewhat unfriendly as a term.


Then I would suggest they state “resource-based billing of workers”, with Unity workers having a pre-set price given their known operating expenses, as the statement “We expect to have pricing for Unity3D workers and C++ workers from the start” leads to a different conclusion: that pricing is based on which SDK is linked, not on resources.

I feel that C++/C#/Javascript are regarded as ‘light utility workers’ and physical simulation servers as ‘heavy workers’.

I can run PhysX/Bullet/Box2D standalone via C++ easily enough to give me physics simulation outside of Unity or Unreal, if it’s going to affect my pricing that greatly, and I’ll encourage others to do the same.

emphasis on semantics


Let me know what you think, and if you have any questions.

I’m just doing as requested.

Are we being billed per worker plus per worker-hour? It seems so, based on the two bullet points about “worker-hours” and “Different types of workers will be priced differently.” -> Is this reflecting a different hourly rate per worker OR do we have a setup fee per worker before it’s billed hourly?


Is this reflecting a different hourly rate per worker OR do we have a setup fee per worker before it’s billed hourly?

Thanks for the feedback so far - we have no plans for a ‘set-up’ cost.

We’re planning to go into much more detail, with concrete examples, when we do the full release, so people can get a good understanding of what costs will look like for various kinds of game. As rightly pointed out, the simulation fidelity of the world affects the number of workers you need to simulate it.

For a simple example, looking at the default setup for the Hello World project: it runs two managed Unity workers to simulate the world. Assuming the deployment ran for an hour and had 20 people logged in consistently, the hourly cost would be (2 * Unity worker hourly cost + 20 * connected client worker hourly cost).
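As a sketch of that formula, with placeholder rates (Improbable has not published price points, so the dollar figures below are invented purely for illustration):

```python
def hourly_cost(worker_count, worker_rate, user_count, user_rate):
    """Hourly bill under the model described above: worker-hours
    plus user-hours, each at its own rate. The rates passed in are
    hypothetical; no real price points have been published."""
    return worker_count * worker_rate + user_count * user_rate

# Hello World example: 2 managed Unity workers, 20 connected clients.
# $0.30/worker-hour and $0.01/user-hour are placeholder figures.
print(hourly_cost(2, 0.30, 20, 0.01))
```

With those invented rates the deployment would run about $0.80 per hour; the point is only the shape of the calculation, not the amounts.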


More cost breakdown will obviously benefit the discussion, but given what’s known currently I am still not a fan of the hourly cost being tied to the SDK used rather than to resource allocation/usage.

Any idea what the SLA will look like? Guaranteed RTT/jitter? What’s to stop me spinning up C++ instances that overwhelm the host? Are resource limits put in place to prevent this? What about overall security? Are instances spun up as containerized VMs?

If areas of my world are empty of players/active entities for a certain time, can those areas/zones be put to sleep? I know of no current MMO that does not benefit from dynamic reallocation in a simulated world.

And the user + worker billing seems odd; I’m being billed on both sides? The resources used by the server are directly tied to the number of connecting clients, meaning my vertical scaling does not track the scaling of cost and I’ll reach an unattainable point quickly.

Resource based hourly billing seems the most logical still to me.


I noticed you want to allow people to test and play around with SpatialOS without needing to drop a large sum of money. I would suggest making the development process of a game free; when someone wants to go live or get into high-player testing, that is when you start to bill. This way the indie community can still produce products without a huge overhead. As an indie developer, it is hard enough to get a game out without paying large amounts of money up front. But if a game goes live and I am making money off of SpatialOS, I fully expect to pay Improbable for their awesome servers. Just a thought.


I would suggest you look at making the development process of a game free

We teamed up with Google to let us do just that!

You can read more about it here:

This will enable games studios to build, deploy and test games on SpatialOS up to the point of commercial release with significantly reduced, and in many cases completely eliminated, SpatialOS usage costs, including cloud computing fees. We hope this promotes experimentation, much earlier user testing, iteration of games, and an explosion of new ideas.

The deployments you can run with the SpatialOS alpha are currently free, but when the Google program kicks off you’ll be able to use credits to run much larger deployments without paying.


Hypothetical scenario: a typical MMORPG launch. 90% of users play at least 8 hours a day for the first month. Subscription fees for those users will not exceed US$15 per month. Hundreds of AIs, dozens of workers for a very large landscape, dungeons, overland encounters, and so forth.

If the game is even a modest success and has hundreds of active players, it seems difficult to imagine that $15 per player will cover the fees it appears SpatialOS will be charging for all that play time.
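The arithmetic behind that worry can be made explicit. This sketch ignores worker-hour costs entirely (so the real per-user-hour ceiling would be even lower); all figures come from the hypothetical scenario above.

```python
def breakeven_user_hour_rate(monthly_sub: float, hours_per_day: float,
                             days: int = 30) -> float:
    """Maximum user-hour price a flat subscription can absorb,
    ignoring worker-hour costs entirely, so the true ceiling is lower."""
    return monthly_sub / (hours_per_day * days)

# $15/month at 8 hours/day over 30 days = 240 user-hours per player,
# leaving at most 6.25 cents of revenue per user-hour.
rate = breakeven_user_hour_rate(15.0, 8)
print(round(rate, 4))  # 0.0625
```

In other words, under these assumptions the user-hour price alone would need to stay well under about six cents for such a game to be viable on subscriptions.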


This is the exact type of information that I want to know.

If my target market can’t afford to pay for these types of experiences, it doesn’t make sense to begin development.

My game design document is just waiting on these sorts of details before I feel confident about making certain decisions; it might be that I have to heavily rethink the core concept due to the costs associated with it and the amount people are willing to pay for such an experience, not to mention the payment model.

Anyway, great discussion so far. And @draconigra, thanks again for the information; it’s a good distinction that worlds above certain sizes will REQUIRE the workers (given there are entities requiring constant attention, unlike sleeping rigidbodies, for instance…)


Hello there. I’m pitching in here more from the “technical” side, but I think I can address a couple of questions with a (light) technical background on some of the decisions regarding billing (read: I’m an engineer, not a marketing/sales/decision-making person).

I don’t know anything about the SLA, but I can say that overwhelming the host would only damage your own deployment and would not affect somebody else’s world. So unless you want to ramp up your own bill you should not do it, and somebody else will not be able to affect your game’s performance and billing. Or maybe I misunderstood your concern there?

This goes somewhat against the idea of persistence that is at the core of what SpatialOS offers: the ability to have a part of your world keep evolving even when there’s no player around. No players will probably mean (depending on your game’s environment) that less work is required to simulate what remains, and the worker load will decrease accordingly, allowing some of the workers to shut down and saving money. @draconigra makes some very good points about this in his replies, and this is exactly what you are referring to with dynamic reallocation. From that perspective the worker-hours measure reflects, to a non-negligible degree, the CPU/memory load. The load balancing takes care of making sure that you don’t have two half-used workers running, which would make you pay double.

In a high-level manner you can see it as follows:

  • Worker-hours represent the CPU/memory load that you are using for running the game’s simulation.
  • User-hours represent the network bandwidth that you are using for having people connect to your game.

From that perspective I don’t think you can say you are “billed on both sides”, as the two measures cover separate aspects that together amount to the total cost of running your game. The separation between worker-hours and user-hours stems directly from the fact that:

  1. The number of workers has a high and ‘linear’ impact on CPU/memory consumption, as load balancing takes care of having only the minimal number of workers that keeps the deployment in a viable state, and it has no effect on bandwidth costs (you don’t really pay for communications between machines in the cloud).
  2. The number of users has a high and ‘linear’ impact on the cloud <-> internet bandwidth that you are using, for which you do pay. There is certainly an impact on CPU/memory usage, but that is already covered by the worker-hours of the extra workers needed for the simulation: this is not double-pay, as the user-hours only reflect bandwidth and not CPU/memory, which I am sure will become clearer once more precise details about the pricing are shared.


Also, as was raised in the first reactions: we are talking about Unity3D because it’s for the moment the only engine we have official support for. As @mathieu’s initial message puts forward:

So don’t see the Unity/C++ as a “that’s it folks”. :slight_smile:


Excellent, that addresses a couple of questions! Is each worker provided its own compute instance from google then?

That is the point I was addressing, so I’m glad to hear the load balancer will make intelligent decisions to spare resources.

I do believe a good deal of the ultimate decision on whether this is a fair pricing model for us will come down to what the rates end up being, but I do better understand the logic behind the billable-hour separation now.

It seems to work out to:
instances * (compute instance hourly cost + Improbable markup) = “worker-hours”
Follow-up -> Do certain SDKs (C++, Unity) end up spinning up different instances, and is that why there is a cost difference?

outbound/inbound traffic + Improbable markup = “user-hours”
(Google only charges for out-of-network traffic, and only in or out depending on which is greater)

Different pricing for different SDKs is still an issue to me, regardless of future intentions to “add more”, unless it is the case above where instance size actually dictates the cost, not the SDK. If that is the case, I’d much prefer to be able to select the instance size in the configuration.



Not exactly; but (and don’t hold me to any exact promises here) different deployments do not share instances, and multiple workers of the same deployment can run on a single instance/node.

The load of a worker that is dynamically load-balanced as described in the docs (here, the C++ one) is determined by the user, as they are the only one with the knowledge to determine whether a given worker can take on more load or not. This might not always equal the current CPU/memory usage percentage of the instance/node on which it is running (for example, a worker can start to hit the processing-latency limit you want to guarantee without maxing out CPU and/or memory usage). It’s this load that determines whether another worker will be started or an existing one shut down.
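As an illustration of what a user-defined load metric might look like, here is one plausible way a worker could combine CPU use and latency headroom into a single value. This is an invented sketch, not the actual SpatialOS API or any official formula.

```python
def worker_load(cpu_frac: float, tick_ms: float, tick_budget_ms: float) -> float:
    """Illustrative load metric a worker might report to the load balancer:
    the worse of CPU utilisation and processing-latency headroom. Entirely
    hypothetical; SpatialOS lets the user define load however they like."""
    return max(cpu_frac, tick_ms / tick_budget_ms)

# A worker at only 40% CPU but already taking 45ms of a 50ms tick budget
# reports 0.9: nearly saturated on latency even though CPU looks fine.
print(worker_load(0.40, 45.0, 50.0))  # 0.9
```

This captures the point made above: a worker can be "full" on latency long before it tops out CPU or memory, so raw resource usage alone is not the right trigger for scaling.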

Going further along this line, you can also see reasons why, for example, a C++ worker and a Unity worker might be priced differently. Again (I know I’m repeating this, but it’s of significant importance), I’m giving an educated opinion from my own perspective as an engineer who happens to work at Improbable, not a business promise or an official statement.

With that disclaimer in mind: running an instance of a Unity worker has, by its nature, a larger CPU/memory overhead than running a “bare” C++ worker with only your own AI logic or other simulation service. This means that, considering average workers, the Unity ones will use more CPU/memory than the C++ ones for a fully loaded worker; the important keyword being average.

That means that, from Improbable’s perspective, you can run either 3 average fully-loaded Unity workers or 6 average fully-loaded C++ workers on an instance (just example figures, not related to real data). As cloud costs are counted per instance and not by the exact CPU/memory usage of the instances, you can see that it makes sense to bill Unity worker-hours at twice the C++ worker-hour rate for these example figures, while still reflecting the resources actually used, with no double-billing or artificial ramping up of prices.
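Using the example packing figures above (3 Unity vs. 6 C++ workers per instance, explicitly not real data, and an invented instance price), the pass-through pricing logic works out like this:

```python
def worker_hour_price(instance_hourly_cost: float,
                      workers_per_instance: int) -> float:
    """Per-worker-hour price that exactly passes the instance cost through
    to the user, given how many average workers fit on one instance.
    All figures here are illustrative, not real pricing data."""
    return instance_hourly_cost / workers_per_instance

instance_cost = 0.60  # hypothetical $/instance-hour
unity_rate = worker_hour_price(instance_cost, 3)  # 3 Unity workers fit
cpp_rate = worker_hour_price(instance_cost, 6)    # 6 C++ workers fit
print(unity_rate / cpp_rate)  # roughly 2: Unity billed at about twice C++
```

The ratio, not the absolute price, is the point: packing density per instance, rather than the SDK label itself, is what drives the differential.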

I fully acknowledge that this does not account for particular cases, or for specific workers in one user’s specific project, but it does show that it’s about the average CPU/memory use of a certain type of worker. It’s something, I think, that @draconigra pointed at slightly when saying:


Fantastic information there; I think Improbable could start rolling some of these answers into an FAQ and prune this thread slightly, so future visitors get the information without as much back and forth.

While I do see the abstract concept behind tiered billing per “worker type”, I still do not agree that it is the ideal method of billing. As an example, wouldn’t adapting a C++ wrapper to exec() the Unity client as a worker simply work around this artificial pricing while giving access to the same underlying resources? Or, on the flip side, linking PhysX (physics) or Kythera (AI) into C++: resource-heavy, but in the low pricing tier.

I’d prefer to see a choice in the worker configuration that lets me select resource allocation (Heavy, Med, Light) and be billed based on the decisions I make for my project. So if I have a heavy C++ worker, I can be assured I get the resources (multiple cores, memory) I need. You don’t always want to scale by adding more workers/hardware.

Thanks again!


I’d prefer to see a choice in the worker configuration that lets me select resource allocation (Heavy, Med, Light)

A very good point. For general-purpose utility workers (C++/C#/Java), the amount of resources you’d want to allocate to them would vary with their ability to make use of the additional threads/memory you give them. As our model matures, I can easily imagine us having different ‘classes’ of general-purpose worker that are given different amounts of resources.

More specific workers, such as game engines (Unity/Unreal/etc.), often have a much more rigid threading model, and as a result we have a strong idea of the maximum amount of resources they could ever effectively utilize; giving a Unity instance 16 cores would rarely make sense. You could happily spool up a Unity instance inside a C++ worker, it just wouldn’t be the best environment to run Unity in.
