
General Question about the flexibility of C++ workers and possible sensors


#1

I have been looking through the Pirates C++ worker tutorial. It's great that you can define intelligent agent behavior through external code, and I was wondering just how flexible this feature is. For instance, could each user write their own C++ code for agent behavior that would then persist in the world, either as an individual agent per client or a fleet of agents per client? Could this C++ code take advantage of specific external libraries like OpenCV or MRPT? This would be great for testing autonomous systems in a world where each user's agents can interact with everyone else's under different behaviors, so you could measure which ones perform best.

The other question is what sensors could be created on these agents. This is probably limited to seeing other entities and their components: since Unity is only acting as a visualizer in this case, you probably couldn't use any camera input or ray tracing data to mimic lidar effects, unless the agents could reference some Unity version of the world, generate measurements there, and upload that input data to their respective workers. Just wondering what is possible here; thanks for any thoughts and feedback.
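To make the question concrete, here's a rough sketch of the kind of per-agent behavior interface I'm imagining - every name here is hypothetical, nothing is from the tutorial or the SDK:

```cpp
#include <memory>
#include <string>
#include <unordered_map>
#include <vector>

// Hypothetical observation/command types. In SpatialOS these would come
// from generated schema code rather than being hand-written like this.
struct Observation {
    double x = 0.0, z = 0.0;               // this agent's position
    std::vector<std::string> visible_ids;  // entities the agent can "see"
};

struct Command {
    double move_dx = 0.0, move_dz = 0.0;   // desired movement this tick
};

// Interface each user would implement in their own C++ code.
class AgentBehavior {
public:
    virtual ~AgentBehavior() = default;
    virtual Command Tick(const Observation& obs) = 0;
};

// Example user behavior: move in +x whenever anything is visible
// (a stand-in for real chasing/avoidance logic).
class ChaseBehavior : public AgentBehavior {
public:
    Command Tick(const Observation& obs) override {
        Command cmd;
        if (!obs.visible_ids.empty()) { cmd.move_dx = 1.0; }
        return cmd;
    }
};

// A worker could own one behavior per agent (or per fleet) and tick them all.
int main() {
    std::unordered_map<std::string, std::unique_ptr<AgentBehavior>> agents;
    agents["agent-1"] = std::make_unique<ChaseBehavior>();
    Observation obs{0.0, 0.0, {"agent-2"}};
    for (auto& [id, behavior] : agents) {
        Command cmd = behavior->Tick(obs);
        (void)cmd;  // in a real worker this would become a component update
    }
}
```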


#2

Your rich AI worker could communicate with the UnityWorkers through schema, requesting sensor data like line-of-sight checks. You could have a component on the entity, writable by the physics engine, that the engine drops those updates into.
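For illustration, here's a completely self-contained sketch of the kind of occlusion test the physics side could run before writing the result into that component - pure geometry, and every name is made up rather than SDK API:

```cpp
#include <algorithm>
#include <iostream>

struct Vec3 { double x, y, z; };

// Returns true if the segment from `a` to `b` passes through a sphere
// (a crude stand-in for an occluder) centred at `c` with radius `r`.
bool SegmentHitsSphere(const Vec3& a, const Vec3& b, const Vec3& c, double r) {
    Vec3 ab{b.x - a.x, b.y - a.y, b.z - a.z};
    Vec3 ac{c.x - a.x, c.y - a.y, c.z - a.z};
    double len2 = ab.x * ab.x + ab.y * ab.y + ab.z * ab.z;
    // Project the sphere centre onto the segment, clamped to [0, 1].
    double t = len2 > 0.0
        ? std::clamp((ac.x * ab.x + ac.y * ab.y + ac.z * ab.z) / len2, 0.0, 1.0)
        : 0.0;
    Vec3 closest{a.x + t * ab.x, a.y + t * ab.y, a.z + t * ab.z};
    double dx = c.x - closest.x, dy = c.y - closest.y, dz = c.z - closest.z;
    return dx * dx + dy * dy + dz * dz <= r * r;
}

int main() {
    Vec3 sensor{0, 1, 0}, target{10, 1, 0}, wall{5, 1, 0};
    bool blocked = SegmentHitsSphere(sensor, target, wall, 1.0);
    std::cout << (blocked ? "occluded" : "line of sight") << "\n";  // occluded
}
```

In practice you'd run the real raycast inside the physics/Unity worker and only ship the boolean (or the list of visible entities) back through the component.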

Running user-written code server side has its own grab-bag of problems - there's certainly nothing stopping you from allowing users to use a node-based graphical interface to define behaviour trees that then run on the server and are persistent. I wouldn't do it with something like Lua or Python without some serious security infrastructure (the last thing you want is users being able to execute arbitrary code on your server :smiley: )
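To show why the node-based route is safer: the tree itself can be plain data interpreted by trusted server code, so users never ship executable code at all. A toy sketch (hypothetical types - this isn't anything from our SDK):

```cpp
#include <functional>
#include <memory>
#include <vector>

enum class Status { Success, Failure };

// A node is just data plus a tick callback supplied by *our* server code;
// users only choose how nodes are composed, never what the nodes execute.
struct Node {
    std::function<Status()> tick;                 // set on leaves only
    std::vector<std::shared_ptr<Node>> children;  // set on composites only
};

// Sequence composite: succeeds only if every child succeeds, in order.
Status TickSequence(const Node& node) {
    for (const auto& child : node.children) {
        Status s = child->tick ? child->tick() : TickSequence(*child);
        if (s == Status::Failure) return Status::Failure;
    }
    return Status::Success;
}

int main() {
    // The tree a user might assemble in a graphical editor:
    //   Sequence -> [IsTargetVisible, MoveToTarget]
    auto is_visible = std::make_shared<Node>();
    is_visible->tick = [] { return Status::Success; };  // stubbed condition
    auto move_to = std::make_shared<Node>();
    move_to->tick = [] { return Status::Success; };     // stubbed action
    Node root;
    root.children = {is_visible, move_to};
    return TickSequence(root) == Status::Success ? 0 : 1;
}
```

Deserialise that structure from whatever the graphical editor saves, and the only code that ever runs server side is your own.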

Your C++ workers are free to link to whichever libraries you want (Bullet Physics, AI libraries, etc.)


#3

Thanks Callumb for the information, that sounds very good. It seems like it wouldn't be too far of a stretch, then, to create a demo where users design their own pathfinding bots to complete open tasks in a world - these tasks could be picking up pedestrians or packages and delivering them to different areas. It makes sense that you can get line-of-sight and other data from the Unity workers; then I guess it's just a matter of tying together the communication between the Unity workers and the users' C++ workers.
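For a first pass at those bots, I imagine even a plain breadth-first search over an occupancy grid would do - a self-contained sketch, nothing SpatialOS-specific and all names my own:

```cpp
#include <array>
#include <iostream>
#include <queue>
#include <utility>
#include <vector>

// Breadth-first search on a small occupancy grid: returns the number of
// steps from start to goal, or -1 if the goal is unreachable. 1 = blocked.
int ShortestPath(const std::vector<std::vector<int>>& grid,
                 std::pair<int, int> start, std::pair<int, int> goal) {
    int rows = static_cast<int>(grid.size());
    int cols = static_cast<int>(grid[0].size());
    std::vector<std::vector<int>> dist(rows, std::vector<int>(cols, -1));
    std::queue<std::pair<int, int>> frontier;
    dist[start.first][start.second] = 0;
    frontier.push(start);
    const std::array<std::pair<int, int>, 4> moves{{{1, 0}, {-1, 0}, {0, 1}, {0, -1}}};
    while (!frontier.empty()) {
        auto [r, c] = frontier.front();
        frontier.pop();
        if (std::make_pair(r, c) == goal) return dist[r][c];
        for (auto [dr, dc] : moves) {
            int nr = r + dr, nc = c + dc;
            if (nr >= 0 && nr < rows && nc >= 0 && nc < cols &&
                grid[nr][nc] == 0 && dist[nr][nc] == -1) {
                dist[nr][nc] = dist[r][c] + 1;
                frontier.push({nr, nc});
            }
        }
    }
    return -1;
}

int main() {
    std::vector<std::vector<int>> grid = {
        {0, 0, 0},
        {1, 1, 0},  // a wall forcing a detour
        {0, 0, 0},
    };
    std::cout << ShortestPath(grid, {0, 0}, {2, 0}) << "\n";  // prints 6
}
```

Each bot's worker could run something like this over a shared map component and then publish its next waypoint as a position update.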