Behaviour Comparison

Purpose

This scene demonstrates how to analyze, side by side, the performance of two approaches to agent cover behaviour: scripted vs. Machine Learning (ML).

How to Use

The scene consists of two Training Area prefabs divided by agent type:

Agent Type: Scripted

  • Logic: Use the cover and enemy positions to determine the best* place to take cover (see the sketch after this list)
  • *Best = the cover point closest to the blue agent that blocks the red enemy’s line of sight
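
The selection rule above can be sketched in a few lines of C#. This is an illustrative sketch rather than the actual TrainingArea_TakeCover code; the names (CoverSelectionSketch, coverPoints, agent, enemy) are placeholders.

    using UnityEngine;

    // Sketch only: not the shipped TrainingArea_TakeCover implementation.
    public static class CoverSelectionSketch
    {
        // Returns the cover point closest to the agent that blocks the enemy's line of sight.
        public static Vector3 FindBestCoverPoint(Vector3[] coverPoints, Vector3 agent, Vector3 enemy)
        {
            Vector3 best = agent;
            float bestDistance = float.MaxValue;
            foreach (var point in coverPoints)
            {
                // The point only counts as cover if scene geometry blocks the enemy's view of it.
                bool blocked = Physics.Linecast(enemy, point);
                float distance = Vector3.Distance(agent, point);
                if (blocked && distance < bestDistance)
                {
                    bestDistance = distance;
                    best = point;
                }
            }
            return best;
        }
    }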

Agent Type: ML – Reinforcement Trained

  • Observations: Player, Cover, and Enemy positions
  • Actions: Move up / down / left / right; Rotate left / right
  • Rewards: Standing in cover
  • Penalties: A small penalty for moving (see the sketch after this list)
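
For reference, the observation/action/reward structure above maps onto the Unity ML-Agents Agent API roughly as shown below. This is an illustrative sketch, not the agent script shipped with the scene; the field names and reward magnitudes are assumptions.

    using Unity.MLAgents;
    using Unity.MLAgents.Actuators;
    using Unity.MLAgents.Sensors;
    using UnityEngine;

    // Illustrative sketch of the observation/action/reward layout described above.
    public class TakeCoverAgentSketch : Agent
    {
        public Transform player;   // blue agent
        public Transform cover;    // cover object
        public Transform enemy;    // red enemy
        public bool inCover;       // set elsewhere, e.g. by a line-of-sight check

        public override void CollectObservations(VectorSensor sensor)
        {
            // Observations: Player, Cover, and Enemy positions.
            sensor.AddObservation(player.position);
            sensor.AddObservation(cover.position);
            sensor.AddObservation(enemy.position);
        }

        public override void OnActionReceived(ActionBuffers actions)
        {
            // Actions: move up/down/left/right (branch 0) and rotate left/right (branch 1).
            int move = actions.DiscreteActions[0];
            int rotate = actions.DiscreteActions[1];
            // ... apply the movement and rotation to the agent here ...

            // Reward for standing in cover; small penalty for moving.
            if (inCover) AddReward(0.01f);
            if (move != 0) AddReward(-0.001f);
        }
    }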

At runtime, each agent (blue) immediately moves toward the cover point closest to its current position that blocks the enemy’s (red) line of sight. The board resets automatically when both agents reach the goal, or when the 15-second time limit expires.

Click the Reset button at any time to clear the board, for example when an agent clearly will not reach the goal in time.
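
The reset rules amount to a single per-frame check. The sketch below illustrates them with placeholder names (scriptedAgentDone, mlAgentDone, ResetBoard); it is not the actual example script, but a ResetBoard-style method is the kind of handler the scene’s Reset button would call.

    using UnityEngine;

    // Sketch of the reset rules: reset when both agents reach cover or after 15 seconds.
    public class BoardResetSketch : MonoBehaviour
    {
        public float timeLimit = 15f;   // seconds per episode
        public bool scriptedAgentDone;  // set when the scripted agent reaches cover
        public bool mlAgentDone;        // set when the ML agent reaches cover
        float elapsed;

        void Update()
        {
            elapsed += Time.deltaTime;
            if ((scriptedAgentDone && mlAgentDone) || elapsed >= timeLimit)
                ResetBoard();
        }

        // A method like this can also be wired to the scene's Reset button.
        public void ResetBoard()
        {
            elapsed = 0f;
            scriptedAgentDone = false;
            mlAgentDone = false;
            // ... reposition the agents, enemy, and cover here ...
        }
    }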

Scene Location & Name

Assets/Ride/Examples/Behaviours/AgentBehaviours/ExampleBehaviourComparison.unity

Setup Requirements 

This comparison scene requires the ExampleBehaviourComparison script and two instances of the TrainingArea_TakeCover prefab, which contains the TrainingArea_TakeCover script along with the floor/walls, cover, goals, Agent (scripted/ML), and Enemy game objects.

ExampleBehaviourComparison Script

First, add this script to an object in your scene; it automatically sets the Training Area type for each TrainingArea_TakeCover prefab.
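
Conceptually, the script tags each training area with its agent type when the scene starts. The sketch below illustrates that idea only; it is not the actual ExampleBehaviourComparison source, and the AgentType enum and field names are placeholders.

    using UnityEngine;

    // Placeholder sketch of the setup idea, not the real ExampleBehaviourComparison API.
    public enum AgentType { Scripted, MachineLearning }

    public class BehaviourComparisonSetupSketch : MonoBehaviour
    {
        public GameObject scriptedTrainingArea;
        public GameObject mlTrainingArea;

        void Start()
        {
            Configure(scriptedTrainingArea, AgentType.Scripted);
            Configure(mlTrainingArea, AgentType.MachineLearning);
        }

        void Configure(GameObject area, AgentType type)
        {
            // The real script would set the corresponding type on the area's
            // TrainingArea_TakeCover component; here we only record the assignment.
            Debug.Log($"{area.name} configured as {type}");
        }
    }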

TrainingArea_TakeCover Prefab and TrainingArea_TakeCover Script

Next, create empty game objects named “Scripted” and “ML”, then add the TrainingArea_TakeCover prefab to your scene. Move this prefab instance under “Scripted” and rename it “ScriptedTrainingArea”. Add a second instance of the prefab, move it under “ML”, and rename it “MLTrainingArea”, so the hierarchy matches the outline below.
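
The expected hierarchy, using the names from the steps above:

    Scripted
        ScriptedTrainingArea   (instance of TrainingArea_TakeCover)
    ML
        MLTrainingArea         (instance of TrainingArea_TakeCover)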

The example scene also includes Canvas and TrainingMenu prefabs that provide the menu panels and buttons.

Customizing and Extending the Scene

Create a prefab variant of TrainingArea_TakeCover to begin constructing your own training environments. 

Prerequisite: if you add your own ML agents to a similar comparison structure, those agents must be trained beforehand. See Training TakeCover ML for more information.

The ML agent in the example scene models cover behaviour through inference. In this case, the training scene was linked over TCP to an external Python application; the user-defined observations are collected in an academy in which the agents’ “brains” undergo training.

The trained “brain” output file, in the format used by the Barracuda neural-net inference library, is imported into Unity and used in the example scene: Assets/Ride/Examples/Behaviours/MLBehaviours/TakeCover.nn
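
If you need to swap in a different trained model at runtime, ML-Agents lets you assign a Barracuda NNModel asset through Agent.SetModel. The snippet below is a sketch that assumes the .nn asset has been assigned to the takeCoverModel field in the Inspector and that the agent’s behaviour name is "TakeCover".

    using Unity.Barracuda;
    using Unity.MLAgents;
    using UnityEngine;

    // Sketch: assign a trained .nn model (e.g. TakeCover.nn) to an agent at runtime.
    public class ModelSwapSketch : MonoBehaviour
    {
        public Agent agent;
        public NNModel takeCoverModel;  // e.g. the TakeCover.nn asset listed above

        void Start()
        {
            // "TakeCover" is an assumed behaviour name; match it to the agent's
            // Behavior Parameters component in your scene.
            agent.SetModel("TakeCover", takeCoverModel);
        }
    }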

Unity Machine Learning Agents Information

Unity ML-Agents Tools

Unity ML-Agents How-To

Unity ML-Agents Intro – Blog