This scene demonstrates how one might analyze, side by side, the performance of two approaches to agent cover behaviour: scripted vs. Machine Learning (ML).
The scene consists of two Training Area prefabs divided by agent type:
Agent Type: Scripted
Agent Type: ML – Reinforcement Trained
During runtime, each agent (blue) immediately proceeds toward the cover point closest to its current position that blocks the enemy's (red) line of sight. The board resets automatically when both agents reach their goals, or when the 15-second time limit expires.
Click the Reset button at any time to clear the board, for example when an agent clearly will not reach its goal before the time limit.
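The scripted logic lives in the shipped agent/TrainingArea_TakeCover scripts, but the selection rule described above can be sketched as follows. This is a minimal illustration only; the field names (coverPoints, enemy) and the use of Physics.Linecast for the line-of-sight test are assumptions, not the actual implementation.

```csharp
using UnityEngine;

// Hypothetical sketch of the scripted cover-selection rule described above.
public class ScriptedCoverPicker : MonoBehaviour
{
    public Transform[] coverPoints; // candidate cover positions in the training area
    public Transform enemy;         // the red enemy unit

    // Returns the closest cover point whose straight line to the enemy is blocked by geometry.
    public Transform PickCover()
    {
        Transform best = null;
        float bestDistance = float.MaxValue;

        foreach (Transform point in coverPoints)
        {
            // A point counts as cover if something blocks the line from it to the enemy.
            bool losBlocked = Physics.Linecast(point.position, enemy.position);
            float distance = Vector3.Distance(transform.position, point.position);

            if (losBlocked && distance < bestDistance)
            {
                bestDistance = distance;
                best = point;
            }
        }
        return best; // null if no valid cover point exists
    }
}
```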
Assets/Ride/Examples/Behaviours/AgentBehaviours/ExampleBehaviourComparison.unity
This specific comparison scene requires the ExampleBehaviourComparison script and two instances of the TrainingArea_TakeCover prefab, which contains the TrainingArea_TakeCover script along with floor/wall, cover, goal, Agent (scripted or ML), and Enemy game objects.
First, add the ExampleBehaviourComparison script to an object in your scene; it sets the Training Area type automatically for the TrainingArea_TakeCover prefabs.
Next, create empty game objects named “Scripted” and “ML”, then add the TrainingArea_TakeCover prefab to your scene. Move this prefab instance under “Scripted” and rename it “ScriptedTrainingArea”. Add a second instance of the prefab, move it under “ML”, and rename it “MLTrainingArea”.
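For reference, the sketch below performs the same setup in code rather than by hand in the editor. The parent and area names mirror the instructions above; the prefab field, the offsets, and the overall approach are illustrative assumptions, and the real ExampleBehaviourComparison script may work differently.

```csharp
using UnityEngine;

// Illustrative sketch: build the Scripted/ML hierarchy described above at runtime.
public class BehaviourComparisonSetup : MonoBehaviour
{
    public GameObject trainingAreaPrefab; // assign the TrainingArea_TakeCover prefab

    void Awake()
    {
        // Offsets are arbitrary; they simply keep the two areas apart.
        CreateArea("Scripted", "ScriptedTrainingArea", new Vector3(-20f, 0f, 0f));
        CreateArea("ML", "MLTrainingArea", new Vector3(20f, 0f, 0f));
    }

    void CreateArea(string parentName, string areaName, Vector3 position)
    {
        // Empty parent object ("Scripted" or "ML") holding one training area instance.
        GameObject parent = new GameObject(parentName);
        GameObject area = Instantiate(trainingAreaPrefab, position, Quaternion.identity, parent.transform);
        area.name = areaName;
    }
}
```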
The example scene also includes Canvas and TrainingMenu prefabs that create the menu panels and buttons.
Create a prefab variant of TrainingArea_TakeCover to begin constructing your own training environments.
Prerequisite: if adding your own ML agents for a similar comparison structure, those agents must be trained beforehand. See Training TakeCover ML for more information.
The ML agent in the example scene models cover behaviour by inference. In this case, the training scene was linked over TCP to an external Python application; user-defined observations, collected within an academy, were used to train the unit “brains”.
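As a rough sketch of what such an agent looks like on the Unity side, the snippet below uses the Unity ML-Agents Agent overrides for observations and actions. Namespaces and signatures vary across ML-Agents releases, and the observation/action layout shown here is illustrative only, not the layout used to train TakeCover.nn.

```csharp
using Unity.MLAgents;
using Unity.MLAgents.Sensors;
using Unity.MLAgents.Actuators;
using UnityEngine;

// Hedged sketch of a take-cover agent in the ML-Agents style described above.
public class TakeCoverAgent : Agent
{
    public Transform enemy;

    public override void CollectObservations(VectorSensor sensor)
    {
        // User-defined observations sent to the external Python trainer during
        // training, or fed to the imported model when running inference.
        sensor.AddObservation(transform.localPosition);
        sensor.AddObservation(enemy.localPosition);
    }

    public override void OnActionReceived(ActionBuffers actions)
    {
        // Interpret the first two continuous actions as planar movement.
        Vector3 move = new Vector3(actions.ContinuousActions[0], 0f, actions.ContinuousActions[1]);
        transform.localPosition += move * Time.deltaTime;
    }

    public override void OnEpisodeBegin()
    {
        // Episode resets are driven by the training area (goal reached or time expired).
    }
}
```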
The trained “brain” output file, imported into Unity through the Barracuda neural-net inference library and used in the example scene: Assets/Ride/Examples/Behaviours/MLBehaviours/TakeCover.nn
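At inference time this model asset is referenced by the agent's Behavior Parameters, normally assigned in the Inspector. The snippet below is a hedged sketch of making the same assignment from code, assuming a recent ML-Agents/Barracuda package layout; it is not part of the example scene.

```csharp
using Unity.MLAgents.Policies;
using Unity.Barracuda;
using UnityEngine;

// Illustrative sketch: point the agent's Behavior Parameters at the imported .nn model.
public class AssignTakeCoverModel : MonoBehaviour
{
    public NNModel takeCoverModel; // drag the imported TakeCover.nn asset here

    void Awake()
    {
        var behavior = GetComponent<BehaviorParameters>();
        behavior.Model = takeCoverModel;
        behavior.BehaviorType = BehaviorType.InferenceOnly; // run from the model, no external trainer
    }
}
```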