I am implementing an NPC that walks around a virtual space, specifically a cat. I have a series of short animation clips (3-5 seconds). My first instinct was just to choose a random animation when the last one ended, but I realised that wouldn't look realistic: the cat would change behaviour too often, even if the next animation is limited to physically plausible options.
My intended solution is something like a behaviour tree (http://www.gamasutra.com/blogs/ChrisSimpson/20140717/221339/Behavior_trees_for_AI_How_they_work.php), where each animation has a weighted list of possible next animations. E.g. if the cat is walking, it has an 80% chance of continuing to walk, a 20% chance of sitting down, and a 0% chance of sleeping. Basically, using a Markov model to pick an appropriate next step.
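Concretely, I imagine a weighted transition table like this (a minimal sketch; the animation names and weights are made up):

```python
import random

# Transition table: current animation -> list of (next animation, weight).
# Weights are relative, so they don't have to sum to 1.
TRANSITIONS = {
    "walk":  [("walk", 0.8), ("sit", 0.2)],           # never straight to sleep
    "sit":   [("sit", 0.5), ("walk", 0.3), ("sleep", 0.2)],
    "sleep": [("sleep", 0.9), ("sit", 0.1)],
}

def next_animation(current):
    names = [name for name, _ in TRANSITIONS[current]]
    weights = [w for _, w in TRANSITIONS[current]]
    return random.choices(names, weights=weights)[0]
```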
However, I have no idea if this is a good solution, nor do I know how I'm going to generate the mapping from each current animation to its possible next animations and their probabilities. 30 animations * 30 next animations = 900 weightings. That's a lot to set by hand.
The cat will sometimes react if it hits an obstacle, but the crux of the problem is choosing a realistic sequence of animations without picking them all in advance. In the tree there would also be some other inputs, like proximity to a person, location in the room, time since it last ate, etc.
Generally, you need to split your cat's logic from its animations.
First you need to write the cat's logic. One good approach I have found is to split the logic into layers.
Needs
The cat can have some state with motives/needs (eat, sleep, etc.) that slowly grow over time and are reduced by acting on them (think The Sims). You can pick the current task as the one that fulfils the biggest need, using fuzzy logic if you want.
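A minimal sketch of such a needs layer (the need names, growth rates and the simple "pick the biggest need" rule are illustrative assumptions):

```python
class Need:
    def __init__(self, name, growth_per_second):
        self.name = name
        self.value = 0.0                  # 0 = satisfied, 1 = desperate
        self.growth = growth_per_second

    def update(self, dt):
        self.value = min(1.0, self.value + self.growth * dt)

class CatNeeds:
    def __init__(self):
        self.needs = [Need("hunger", 0.010),
                      Need("sleepiness", 0.005),
                      Need("play", 0.020)]

    def update(self, dt):
        for need in self.needs:
            need.update(dt)

    def most_urgent(self):
        # The simplest possible rule; a fuzzy-logic scoring
        # function could replace this max().
        return max(self.needs, key=lambda n: n.value)
```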
Tasks
At each moment in time the cat has a task (find food, find a bed to sleep in, find space to run around, etc.; being idle is a task too). The task tells the cat where it wants to go and what it wants to do.
Actions
Now there's a third layer: actions. Each task has a queue of actions to perform (stand up, walk to, crouch, eat, etc.). Each action is responsible for its own execution; e.g. a walk action should check for obstacles and deliver the cat from point A to point B, possibly creating and executing sub-actions (jump over an obstacle, crouch under furniture, etc.).
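A rough sketch of a task driving a queue of actions (the class names and the update protocol are assumptions, and cat.move_towards / cat.is_at are hypothetical helpers):

```python
from collections import deque

class Action:
    def update(self, cat, dt):
        """Return True while still running, False when finished."""
        raise NotImplementedError

class WalkTo(Action):
    def __init__(self, target):
        self.target = target

    def update(self, cat, dt):
        # Real code would path around obstacles here, possibly by
        # inserting sub-actions (jump over, crouch under) into the queue.
        cat.move_towards(self.target, dt)   # hypothetical helper
        return not cat.is_at(self.target)   # hypothetical helper

class Task:
    def __init__(self, actions):
        self.actions = deque(actions)

    def update(self, cat, dt):
        if not self.actions:
            return False                    # task finished
        if not self.actions[0].update(cat, dt):
            self.actions.popleft()          # current action done, move on
        return True
```

A "Rest" task could then be built as Task([WalkTo(pillow), LieDown(), Sleep()]).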
Animations
Now that the cat has needs, a task and an action, you can pick the right animation for that action. Knowing the current and the next animation, you should be able to transition from one to the other. E.g. if the task says the cat should lie down after walking to its pillow, the animations are queued: walk-stop-sit-lay.
Queuing animations can be done efficiently if you map them onto a graph: animations are nodes, and edges connect animations that can transition into one another (e.g. walk to sit is possible, but jump to chew is not). Then you can queue animations from any one to any other by running A* on this graph.
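A sketch of that search; with unweighted edges, plain breadth-first search finds the same shortest animation queue A* would (the transition graph below is a made-up example):

```python
from collections import deque

# Animation graph: each animation lists the animations it can transition to.
ANIM_GRAPH = {
    "walk": ["stop"],
    "stop": ["walk", "sit"],
    "sit":  ["stop", "lay"],
    "lay":  ["sit"],
}

def animation_path(start, goal):
    """Shortest sequence of animations leading from start to goal, or None."""
    queue = deque([[start]])
    visited = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in ANIM_GRAPH.get(path[-1], []):
            if nxt not in visited:
                visited.add(nxt)
                queue.append(path + [nxt])
    return None

print(animation_path("walk", "lay"))  # ['walk', 'stop', 'sit', 'lay']
```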
Example:
The cat has a need to rest and a need to eat. Let the "Rest" task find a place to rest, walk the cat there, lay it down and rest. Let the "Rest" task check its conditions every now and then; if the surroundings become uncomfortable, let the task end. Then check what the cat wants more right now; if it still wants to rest, repeat the previous part. When the cat is rested, choose a new task.
I think what you are looking for is a finite state machine, or FSM. In short, it's a way of changing the behaviour of NPCs according to their current state.
EDIT:
It's like a behaviour tree, but condensed down into a few groups, "states", that the NPC returns to. A behaviour tree allows much more flexibility in behaviour, but also needs more data for the probability weightings (a clever way to automate that is with tags, as scriptin suggests in his answer). When you're using states, you decide on a certain set of actions and their probabilities within each state. Changing the current action can be biased, with maybe an 80% chance of keeping the same action; if the action should change, the different probabilities are used to select the new one (see the sketch after the list below).
In your case the states could be (a little simplified):
- Sleepy: Sleep 80%, Sit 15%, Walk 5%
- Angry: Roar (do cats roar?) 40%, Hiss 40%, Run 20%
- Hungry: Eat 40%, Hunt 40%, Run 20%
- Playful: Play 60%, Run 20%, Jump 20%
- Scared: Hide 50%, Run 50%
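A minimal sketch of that selection rule, using a few of the states above (the 80% keep-current bias and the weights are just the example numbers):

```python
import random

STATE_ACTIONS = {
    "sleepy": {"sleep": 0.80, "sit": 0.15, "walk": 0.05},
    "hungry": {"eat": 0.40, "hunt": 0.40, "run": 0.20},
    "scared": {"hide": 0.50, "run": 0.50},
}

def choose_action(state, current_action, keep_bias=0.8):
    # Bias towards keeping the current action...
    actions = STATE_ACTIONS[state]
    if current_action in actions and random.random() < keep_bias:
        return current_action
    # ...otherwise draw a new action from the state's weighted list.
    names = list(actions)
    weights = [actions[a] for a in names]
    return random.choices(names, weights=weights)[0]
```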
Every state can have different probabilities of changing state; for example, the angry or scared states maybe don't last long. The different states can also have different rules for which transitions are legal (changing from "sleepy" to "playful" could be illegal, but cats don't seem to care about that). Different events can trigger a state change.
Have a look around by searching the web for FSM and AI to see how it works. It may seem complicated when explained, but it's really simple.
You can use tagging:
- There may be movement tags like "laying", "sitting", "standing", "walking", and "running". Then you can eliminate unrealistic combinations of tags, e.g. "laying" -> "running" (there must be "standing up" in between).
- Other tags may describe activities: "sleeping", "eating", "hunting", etc. Again, "sleeping" -> "hunting" is impossible without intermediate states.
- Since animations like "standing up" are transitional, it may be a good idea to have separate tags for the beginning and the end of each animation. For example, "standing up" may be a transition from "sitting" to "standing", etc.
So, for each animation you could have a few tags:
- Ones describing initial and final position/movement
- At least one describing an activity. Since activities have transitions too, you may also have initial and final activity tags here
With those, you can filter out everything but the possible combinations by setting restrictions such as "A -> B is possible only if final_movement_tag(A) == initial_movement_tag(B)", which will result in a much smaller number of transitions. To those possible combinations, you can then add probabilities as you've described. Assigning probabilities may be based on the activity tags, since staying in the same activity is more probable than changing activities.
So, with tags you could possibly automate the creation of all the transitions in your FSM/behavior tree, and tune them later if you’re not happy with some combinations.
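A sketch of how those tags could generate the whole transition table (the tag layout and the "same activity weighs more" rule are assumptions to illustrate the idea):

```python
# Each animation carries movement tags for its start and end pose,
# plus an activity tag.
ANIMATIONS = {
    "walk":     {"start": "walking",  "end": "walking",  "activity": "moving"},
    "stand_up": {"start": "sitting",  "end": "standing", "activity": "moving"},
    "sit_down": {"start": "standing", "end": "sitting",  "activity": "resting"},
    "lie_down": {"start": "sitting",  "end": "laying",   "activity": "resting"},
    "sleep":    {"start": "laying",   "end": "laying",   "activity": "sleeping"},
}

def build_transitions(same_activity_weight=5.0, other_weight=1.0):
    """A -> B is allowed only if A's end pose matches B's start pose."""
    table = {}
    for a, ta in ANIMATIONS.items():
        row = {}
        for b, tb in ANIMATIONS.items():
            if ta["end"] == tb["start"]:
                # Staying in the same activity is more probable.
                row[b] = (same_activity_weight
                          if ta["activity"] == tb["activity"]
                          else other_weight)
        table[a] = row
    return table
```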
If you'd like to keep the rich possibilities of behaviour trees, you can add a new kind of composite selector node: the Markov selector node.
You would have to implement the Markov selector node yourself. It will select one of its child nodes at random, depending on the (child) node that previously succeeded (or failed).
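A sketch of what that node might look like (the tick interface below is an assumption; behaviour-tree libraries differ in how nodes are run):

```python
import random

class MarkovSelector:
    """Composite node: picks the next child at random, with weights
    conditioned on which child ran previously."""

    def __init__(self, children, transition_weights):
        # transition_weights maps the index of the previously run child
        # (or None, before the first tick) to a list of weights, one
        # per child.
        self.children = children
        self.weights = transition_weights
        self.last = None

    def tick(self, blackboard):
        row = self.weights[self.last]
        index = random.choices(range(len(self.children)), weights=row)[0]
        status = self.children[index].tick(blackboard)  # assumed interface
        self.last = index
        return status
```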