
Different AI Behaviour Branches?

general

#1

When you guys picture a humanoid walking around in a game world, what are a few things you think would give the semblance of intelligence? I figure some basic Sims/Rimworld-style interactions between characters, which affect a variable for ‘happiness’, is a good one. Obviously food and water. I was also thinking of having pre-set water-wells as locations for ‘towns’, which NPCs can found when their need for water gets so low that they know it’d be easier to just start a new town.
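
Roughly what I have in mind, as a throwaway sketch (Python just for illustration; every name, number and threshold below is made up rather than anything I’ve built):

```python
# Sketch of the idea: thirst drains over time, and if every well the NPC knows
# about is further away than the "cost" of founding a town, it starts a new one.
import math

def tick(npc, known_wells, new_town_cost=30.0, desperate=0.2):
    npc["water"] -= 0.01                       # thirst slowly builds each tick
    if npc["water"] >= desperate:
        return "carry_on"
    dists = [math.hypot(w[0] - npc["pos"][0], w[1] - npc["pos"][1]) for w in known_wells]
    if not dists or min(dists) > new_town_cost:
        return "found_new_town"                # easier to start a town than to walk back
    return "walk_to_nearest_well"

npc = {"pos": (0, 0), "water": 0.15}
print(tick(npc, known_wells=[(50, 40)]))       # -> "found_new_town"
```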


#2

Interesting question!

I think this is an area where you could really explore the limits of what is possible in gaming. For example, you could argue that learned behaviour is a key feature of intelligence, so it would be interesting to see what you could come up with whereby your humanoids either learned from each other (e.g. about their interactions with other players - who was hostile, who wasn’t?), or learned about the world itself (where water/resources/food were located).

The challenge would seem to be how you get ‘knowledge’ to pass between characters, and then which entity behaviours alter as a result. And what does this do for your gameplay, in a way that is both fun and affects the world in interesting, unscripted ways that increase player interest?
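
Purely as an illustration of the ‘knowledge passing’ problem (a minimal sketch, all names invented): one crude approach is for each character to carry a small memory of timestamped facts and to merge it with whoever it meets, keeping the more recent observation.

```python
# Sketch only: each NPC remembers facts ("well at (5,5)", "player_7 is hostile")
# with the tick it last saw them, and two NPCs that meet merge their memories,
# keeping whichever observation is newer. Other behaviours can then read from
# this memory (e.g. avoid players flagged hostile, head for known wells).

def share_knowledge(memory_a, memory_b):
    """Merge two fact dicts of the form {fact: last_seen_tick}."""
    merged = dict(memory_a)
    for fact, seen_at in memory_b.items():
        if fact not in merged or seen_at > merged[fact]:
            merged[fact] = seen_at
    return merged

scout = {("well", (5, 5)): 120, ("hostile", "player_7"): 90}
farmer = {("well", (5, 5)): 40, ("food", (2, 9)): 110}
print(share_knowledge(scout, farmer))
# once both adopt the merged dict, they share the latest well sighting,
# the hostile player and the food location
```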

@callumb any thoughts on how to approach this from a coding/game design perspective?


#3

Though I am not Callum, I would also like to add to the conversation :slight_smile:

AI of the kind you describe is awesomely interesting, but it is also a rather complex subject. There are legions of articles on how to use behaviour trees or Finite State Machines, but if you want an adaptive system I am of the opinion that these become too complex.

Instead, I can recommend looking at AI based on Utility Theory. This type of AI is driven by needs and desires and decides on whichever action is most appropriate for the current situation. So instead of walking through a tree of decisions, your character observes its current situation, adds up a series of ‘weights’ that contribute to each possible outcome, and acts on the strongest outcome.
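
As a very rough illustration of that scoring idea (a minimal Python sketch using my own invented action names and weights, not any particular tool’s API):

```python
# Utility-style decision: score every candidate action from the current
# situation and act on the strongest outcome.

def score_drink(npc):
    return 1.0 - npc["water"]              # the thirstier, the more attractive drinking is

def score_socialise(npc):
    return (1.0 - npc["happiness"]) * 0.6  # low happiness pushes towards other characters

def score_wander(npc):
    return 0.1                             # small baseline so the NPC never does nothing

ACTIONS = {"drink": score_drink, "socialise": score_socialise, "wander": score_wander}

def choose_action(npc):
    scores = {name: scorer(npc) for name, scorer in ACTIONS.items()}
    return max(scores, key=scores.get)     # strongest outcome wins

npc = {"water": 0.2, "happiness": 0.9}
print(choose_action(npc))                  # -> "drink": thirst dominates here
```

The nice property is that adding a new behaviour is just adding another scoring function, rather than rewiring a whole tree of decisions.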

I can recommend looking at the theory as shown by Apex Tools’ Utility AI and the article by its founder on Gamasutra.

Another interesting article on my to-read list is about how Shadow of Mordor implemented its Nemesis system to give life to AI characters instead of them being mere NPCs:

And finally, I can also recommend looking into how Left 4 Dead used a global AI, which they dubbed ‘The Director’. There are several articles on their AI system, one of which is this one: http://www.gamasutra.com/blogs/BenServiss/20130207/186193/The_Discomfort_Zone_The_Hidden_Potential_of_Valves_AI_Director.php

But most of all: be creative! When you think outside the box you can give your AI many forms. For Elrakis I have planned a series of AIs, from small to global, that perform various duties and determine their actions in various ways. I plan to write up Elrakis’ AI at a later stage, when I have done the bulk of the implementation, so I’m not going to describe too much about it :wink:


#4

@andrew.roper uses a cool plugin for Unity to do his AI:
https://www.assetstore.unity3d.com/en/#!/content/10912
Don’t get me wrong: I’m not saying let the tool you use define your creativity (because otherwise we’d all only use hammers), but certainly adding some constraints can help.
Check out the tool and how it works… perhaps it will spark some ideas for quick wins?
Let us know how it goes!
Cal


#5

I had missed that Behave2 supports Utility Theory AI. I need to dive into that asset again! Thanks!


#6

@GrinningMuffin I hope that gives you some angles to think about?

Thanks from me to @draconigra and @callumb - I am a non-technical character, so I have learned some interesting things from your posts, especially about the interplay between real-world social structures and motivations (my area), and the raw mechanics of translating these (often quite subjective/philosophical/ethereal) concepts into coded behaviour.

It strikes me as very interesting to consider the personal motivations or mindset/world view of any given developer and the extent to which this affects the AI or decision-making model(s) they choose - or exclude - for their game. Some sort of overview of the options and the correlations would be fascinating: perhaps something for our resident expert @zara!


#7

If you find it interesting how real-world social structures and motivations may influence NPCs and other personal AI forms, I can definitely recommend looking at Maslow’s Hierarchy of Needs or Murray’s System of Needs. These concepts from the field of psychology can provide a framework for creating deep and complex interactions between AIs and with players.


#8

I agree - using Maslow’s Hierarchy of Needs as a foundation for social structures is perhaps less risky than using Murray’s System of Needs (in a real-world AGI context, that is). In a game specifically, I think you would get more intricate interactions than if you based it on any other type of social structure, due to the nature of the relationships that would arise out of having dynamic behaviours interplaying, such as the needs for safety, esteem and self-actualisation. It would be pretty interesting to have a game where the overarching goal of an AI was purely hedonistic, though.
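
As a rough illustration of how that could translate into code (a Python sketch with simplified tier names and thresholds, not a faithful model of Maslow): treat the hierarchy as a prioritisation rule, where an AI only pursues a higher tier once every tier below it is reasonably satisfied.

```python
# Maslow-style prioritisation sketch: the lowest unsatisfied tier becomes
# the current goal. Tier names and the threshold are simplified.
MASLOW_TIERS = ["physiological", "safety", "belonging", "esteem", "self_actualisation"]

def current_goal(needs, satisfied_above=0.6):
    """Return the lowest tier whose satisfaction is below the threshold."""
    for tier in MASLOW_TIERS:
        if needs.get(tier, 0.0) < satisfied_above:
            return tier
    return MASLOW_TIERS[-1]               # everything met: keep self-actualising

npc = {"physiological": 0.9, "safety": 0.4, "belonging": 0.8}
print(current_goal(npc))                  # -> "safety": fed and watered, but feels unsafe
```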


#9

I was thinking more of survival-type needs, where they purely want water/food, and if anything prevents that they’ll simply move on to the next town.

I figured every NPC and player would be unkillable unless specifically executed while down; that way NPCs can’t just lose their progress in learning the locations of towns, etc.


#10

On intelligence focused more on survival, and on visible behaviours, I’d suggest the following (there’s a rough sketch of the basic tier after the two lists):

Basic

  • is seen to go and try to get the things it needs, e.g. water, only when it is important: i.e. no forward planning or stockpiling.
  • takes direct routes, with little deviation or consideration of other factors (e.g. threats, terrain).
  • responds to nearest interactions only, usually as an individual.

More advanced

  • Some understanding of competition, so doesn’t always go to nearest source.
  • Happiness is partly a function of isolation/community (e.g. distance from certain types of other entity), promoting different densities in different entity types.
  • Having to wait, or discovering something has run out, evolves the behaviours and limits for the future (e.g. new sources are sought before one runs out, especially if the world can broadcast resource levels for a water supply, etc.).
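
A rough sketch of the ‘basic’ tier (Python purely for illustration; all names and thresholds are made up): react only when a need becomes urgent, and walk straight at the nearest source.

```python
# Basic tier: no forward planning or stockpiling, direct route to the
# nearest source, no account taken of threats, terrain or competition.
import math

def distance(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def basic_step(npc, water_sources, urgency_threshold=0.3):
    """Decide what the NPC does this tick."""
    if npc["water"] >= urgency_threshold:
        return "idle"                                  # ignore water until it matters
    nearest = min(water_sources, key=lambda s: distance(npc["pos"], s))
    return ("move_to", nearest)                        # straight-line, nearest only

npc = {"pos": (0, 0), "water": 0.1}
print(basic_step(npc, water_sources=[(5, 5), (20, 1)]))  # -> ('move_to', (5, 5))
```

The ‘more advanced’ tier would then replace that nearest-source rule with the competition, happiness-by-density and running-out logic above.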

My question to you - to help us advise - would be: what are these intelligence characteristics/character interactions seeking to add to the player experience that makes your idea more fun/more meaningful than other games?