diff --git a/docs/Arena-Object-Definitions.md b/docs/Arena-Object-Definitions.md index 4123c4966..5857608cf 100644 --- a/docs/Arena-Object-Definitions.md +++ b/docs/Arena-Object-Definitions.md @@ -1,6 +1,7 @@ # Animal-AI Environment Objects #### Table of Contents + 1. [Introduction](#introduction) 2. [Unity Objects - What are they?](#unity-objects---what-are-they) 3. [The Arena](#the-arena) @@ -15,18 +16,16 @@ The Animal-AI environment comprises various objects categorized into _immovable_, _movable_, _rewards_, and _other/unique_ types. These objects can be configured in numerous ways to create diverse tasks. Each object's name, default characteristics, and configurable ranges are detailed below. All objects can rotate 360 degrees. Unity uses a left-handed coordinate system with `y` as the vertical axis, and `x` and `z` axes representing horizontal and depth dimensions, respectively. - ## Unity Objects - What are they? Briefly, Unity game objects, commonly referred to as *GameObjects*, are the fundamental components in the Unity Engine, serving as containers for all other components or functionalities within a Unity scene. These objects can represent characters, props, scenery, cameras, lights, and more. Each GameObject can be equipped with various components such as scripts, renderers, colliders, or custom components, defining their behavior and interaction within the game world. *Prefabs* in Unity are essentially templates created from GameObjects; they allow developers to create, configure, and store a GameObject complete with its components and properties. Once a Prefab is created, it can be reused multiple times across the scene or even across different projects, ensuring consistency and efficiency in game development by allowing changes to be made to multiple instances simultaneously. - Most objects in AAI share a handful of fundamental parameters governing their size, position, and other properties. 
Values for these parameters can be defined in YAML ([see here](/docs/Background-YAML.md)). Common parameters are: -- `name`: the name of the object you want to spawn. -- `positions`: a list of `Vector3` positions within the arena where you want to spawn items, if the list is empty the position will be sampled randomly in the arena. Any position dimension set to -1 will spawn randomly. -- `sizes`: a list of `Vector3` sizes, if the list is empty the size will be sampled randomly (within preset bounds for that particular object). You can set any size to -1 to spawn randomly along that dimension only. -- `rotations`: a list of `float` in the range `[0,360]`, if the list is empty the rotation is sampled randomly. -- `colors`: a list of `RGB` values (integers in the range `[0,255]`), if the list is empty the color is sampled randomly. Note that not all objects can have their colour changed and for those (e.g. transparent objects) this value will be ignored. +* `name`: the name of the object you want to spawn. +* `positions`: a list of `Vector3` positions within the arena where you want to spawn items; if the list is empty, the position is sampled randomly in the arena. Any position dimension set to -1 will spawn randomly. +* `sizes`: a list of `Vector3` sizes; if the list is empty, the size is sampled randomly (within preset bounds for that particular object). You can set any size to -1 to spawn randomly along that dimension only. +* `rotations`: a list of `float` in the range `[0,360]`; if the list is empty, the rotation is sampled randomly. +* `colors`: a list of `RGB` values (integers in the range `[0,255]`); if the list is empty, the color is sampled randomly. Note that not all objects can have their colour changed, and for those (e.g. transparent objects) this value will be ignored. Any of these fields can be omitted in the configuration files, in which case the omitted fields are automatically randomized. 
Any Vector3 that contains a -1 for any of its dimensions will spawn that dimension randomly. This can be used to spawn, for example, multiple walls of a set width and height but random lengths. @@ -36,14 +35,14 @@ Any of these fields can be omitted in the configuration files, in which case the
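As an illustrative sketch of how the common parameters above fit together (the tag syntax is the one described in [Background-YAML.md](/docs/Background-YAML.md); all values here are made up), this spawns a wall of fixed width and height but random length by setting its `x` size to `-1`:

```yaml
!ArenaConfig
arenas:
  0: !Arena
    t: 250
    items:
    - !Item
      name: Wall
      positions:
      - !Vector3 {x: 20, y: 0, z: 10}
      sizes:
      - !Vector3 {x: -1, y: 3, z: 1}   # x = -1: length sampled randomly
      rotations: [90]
      colors:
      - !RGB {r: 153, g: 153, b: 153}
```

Omitting any of the lists (`positions`, `sizes`, `rotations`, `colors`) randomizes that property entirely.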

-A single arena is as shown above, it comes with a single agent (spherical animal, [see below](#the-agent)), a floor and four walls. It is a square of size 40x40, the origin (the bottom-left corner) of the arena is `(0,0)`. You can provide coordinates for objects in the range `[0,40]x[0,40]` as floats. +A single arena is shown above; it comes with a single agent (a spherical animal, [see below](#the-agent)), a floor, and four walls. It is a square of size 40x40, and the origin (the bottom-left corner) of the arena is `(0,0)`. You can provide coordinates for objects in the range `[0,40]x[0,40]` as floats. -Note that in Unity the **y** axis is the vertical axis. In the above picture with the agent on the ground in the center of the environment its coordinates are `(20, 0, 20)`. +Note that in Unity the **y** axis is the vertical axis. In the picture above, with the agent on the ground in the center of the environment, its coordinates are `(20, 0, 20)`. For each arena you can provide the following parameters and a list of objects to spawn: -- `t` an `int`, the length of an episode which can change from one episode to the other. A value of `0` means that the episode will not terminate until a reward has been collected (setting `t=0` and having no reward will lead to an infinite episode). This value is converted into a decay rate for the health of the agent. A `t` of 100 means that the agent's health will decay to 0, and the episode will end, after 100 time steps. -- `pass_mark` an `int`, the reward threshold that should constitute a ‘pass’ in the environment. Leaving this parameter undefined leads to the default value of 0, whereby any reward value obtained by the Agent results in a pass. This parameter also determines the notifications that players receive at the end of an episode. If used, this parameter should be defined with consideration to the reward size that can feasibly be obtained by the agent in each configuration file. 
-- `blackouts` +* `t` an `int`, the length of an episode which can change from one episode to the other. A value of `0` means that the episode will not terminate until a reward has been collected (setting `t=0` and having no reward will lead to an infinite episode). This value is converted into a decay rate for the health of the agent. A `t` of 100 means that the agent's health will decay to 0, and the episode will end, after 100 time steps. +* `pass_mark` an `int`, the reward threshold that should constitute a ‘pass’ in the environment. Leaving this parameter undefined leads to the default value of 0, whereby any reward value obtained by the Agent results in a pass. This parameter also determines the notifications that players receive at the end of an episode. If used, this parameter should be defined with consideration to the reward size that can feasibly be obtained by the agent in each configuration file. +* `blackouts` ### Blackouts @@ -51,27 +50,29 @@ Blackouts are parameters you can pass to each arena, which define between which on or off. If omitted, this parameter automatically sets to have lights on for the entire episode. You can otherwise pass two types of arguments for this parameter: -- passing a list of frames `[5,10,15,20,25]` will start with the lights on, switch them off from frames 5 to 9 included, +* passing a list of frames `[5,10,15,20,25]` will start with the lights on, switch them off from frames 5 to 9 included, then back on from 15 to 19 included etc... -- passing a single negative argument `[-20]` will automatically switch lights on and off every 20 frames. +* passing a single negative argument `[-20]` will automatically switch lights on and off every 20 frames. -**Note**: for infinite episodes (where `t=0`), the first point above would leave the light off after frame `25` while the second point would keep switching the lights every `20` frames indefinitely. 
+**Note**: for infinite episodes (where `t=0`), the first point above would leave the light off after frame `25` while the second point would keep switching the lights every `20` frames indefinitely. ## The Agent The agent can be placed anywhere in the arena with any rotation. It has a fixed size and a fixed set of skins. -- **Name**: `Agent` -- **Size**: `(1,1,1)` (not changeable) -- **Skins** (`skins`): `"hedgehog"`, `"panda"`, `"pig"`, `"random"` +* **Name**: `Agent` +* **Size**: `(1,1,1)` (not changeable) +* **Skins** (`skins`): `"hedgehog"`, `"panda"`, `"pig"`, `"random"` -Notes: The agent can be frozefor a specified number of frames at the start of an episode. There is no reward decrement during the frozen period. This can be set with an integer value passed to the `frozenAgentDelays` parameter (defaults to `0`). +Notes: The agent can be frozen for a specified number of frames at the start of an episode. There is no reward decrement during the frozen period. This can be set with an integer value passed to the `frozenAgentDelays` parameter (defaults to `0`). + +
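Putting the arena parameters and the agent options together, one arena might be sketched as follows (an illustrative sketch, not a definitive configuration; the tag syntax follows [Background-YAML.md](/docs/Background-YAML.md) and the values are arbitrary):

```yaml
!ArenaConfig
arenas:
  0: !Arena
    t: 500            # episode length; 0 would mean no time limit
    pass_mark: 1      # reward threshold that counts as a pass
    blackouts: [-20]  # toggle the lights every 20 frames
    items:
    - !Item
      name: Agent
      positions:
      - !Vector3 {x: 20, y: 0, z: 20}
      rotations: [0]
      skins: ["panda"]
      frozenAgentDelays: [10]  # agent frozen for the first 10 frames
```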

The Agent as a hedgehog

The Agent as a panda

The Agent as a pig

@@ -80,267 +81,293 @@ Notes: The agent can be frozefor a specified number of frames at the start of an _Immovable_ objects are fixed in place and cannot be moved. The outer walls of the arena are also immovable and are permanently fixed in place to prevent the player/agent from escaping the arena. ### Wall + -- **Name**: `Wall` -- **Size Range**: `(0.1,0.1,0.1)-(40,10,40)` -- **Color**: RGB range `(0,0,0)-(255,255,255)` +* **Name**: `Wall` +* **Size Range**: `(0.1,0.1,0.1)-(40,10,40)` +* **Color**: RGB range `(0,0,0)-(255,255,255)` ### Transparent Wall + -- **Name**: `WallTransparent` -- **Size Range**: `(0.1,0.1,0.1)-(40,10,40)` -- **Color**: Not changeable +* **Name**: `WallTransparent` +* **Size Range**: `(0.1,0.1,0.1)-(40,10,40)` +* **Color**: Not changeable ### Ramp + -- **Name**: `Ramp` -- **Size Range**: `(0.5,0.1,0.5)-(40,10,40)` -- **Color**: RGB range `(0,0,0)-(255,255,255)` +* **Name**: `Ramp` +* **Size Range**: `(0.5,0.1,0.5)-(40,10,40)` +* **Color**: RGB range `(0,0,0)-(255,255,255)` ### Tunnel + -- **Name**: `CylinderTunnel` -- **Size Range**: `(2.5,2.5,2.5)-(10,10,10)` -- **Color**: RGB range `(0,0,0)-(255,255,255)` +* **Name**: `CylinderTunnel` +* **Size Range**: `(2.5,2.5,2.5)-(10,10,10)` +* **Color**: RGB range `(0,0,0)-(255,255,255)` ### Transparent Tunnel + -- **Name**: `CylinderTunnelTransparent` -- **Size Range**: `(2.5,2.5,2.5)-(10,10,10)` -- **Color**: Not changeable +* **Name**: `CylinderTunnelTransparent` +* **Size Range**: `(2.5,2.5,2.5)-(10,10,10)` +* **Color**: Not changeable ## Movable Objects _Movable_ objects can be easily moved by the agent or other objects. These objects can be pushed by the player/agent as the physics engine is enabled for these objects directly. Note that these objects have aliases (alternative names) for backwards compatibility with previous versions of AAI. 
### Light Cardboard Block + -- **Name**: `LightBlock` -- **Size Range**: `(0.5,0.5,0.5)-(10,10,10)` -- **Color**: Not changeable -- **Alias**: `CardBox1` +* **Name**: `LightBlock` +* **Size Range**: `(0.5,0.5,0.5)-(10,10,10)` +* **Color**: Not changeable +* **Alias**: `CardBox1` ### Heavy Cardboard Block + -- **Name**: `HeavyBlock` -- **Size Range**: `(0.5,0.5,0.5)-(10,10,10)` -- **Color**: Not changeable -- **Alias**: `CardBox2` +* **Name**: `HeavyBlock` +* **Size Range**: `(0.5,0.5,0.5)-(10,10,10)` +* **Color**: Not changeable +* **Alias**: `CardBox2` ### U-shaped Block + -- **Name**: `UBlock` -- **Size Range**: `(1,0.3,3)-(5,2,20)` -- **Color**: Not changeable -- **Alias**: `UObject` +* **Name**: `UBlock` +* **Size Range**: `(1,0.3,3)-(5,2,20)` +* **Color**: Not changeable +* **Alias**: `UObject` ### L-shaped Block + -- **Name**: `LBlock` -- **Size Range**: `(1,0.3,3)-(5,2,20)` -- **Color**: Not changeable -- **Alias**: `LObject` +* **Name**: `LBlock` +* **Size Range**: `(1,0.3,3)-(5,2,20)` +* **Color**: Not changeable +* **Alias**: `LObject` ### J-shaped Block + -- **Name**: `JBlock` -- **Size Range**: `(1,0.3,3)-(5,2,20)` -- **Color**: Not changeable -- **Alias**: `LObject2` +* **Name**: `JBlock` +* **Size Range**: `(1,0.3,3)-(5,2,20)` +* **Color**: Not changeable +* **Alias**: `LObject2` ## Valenced Objects Valenced objects increase or decrease the agent's reward when the agent touches them. Some are stationary and some have an initial velocity at the start of an episode. Note that some of these objects have aliases (alternative names) for backwards compatibility with previous versions of AAI. 
### Stationary Episode-Ending Positive Goal + -- **Name**: `GoodGoal` -- **Size Range**: `0.5-5` -- **Color**: Not changeable -- **Valence**: Positive, proportional to size +* **Name**: `GoodGoal` +* **Size Range**: `0.5-5` +* **Color**: Not changeable +* **Valence**: Positive, proportional to size ### Moving Episode-Ending Positive Goal + -- **Name**: `GoodGoalBounce` -- **Size Range**: `0.5-5` -- **Color**: Not changeable -- **Valence**: Positive, proportional to size +* **Name**: `GoodGoalBounce` +* **Size Range**: `0.5-5` +* **Color**: Not changeable +* **Valence**: Positive, proportional to size Notes: The `rotations` parameter sets the direction of motion. ### Stationary Episode-Ending Negative Goal + -- **Name**: `BadGoal` -- **Size Range**: `0.5-5` -- **Color**: Not changeable -- **Valence**: Negative, proportional to size +* **Name**: `BadGoal` +* **Size Range**: `0.5-5` +* **Color**: Not changeable +* **Valence**: Negative, proportional to size ### Moving Episode-Ending Negative Goal + -- **Name**: `BadGoalBounce` -- **Size Range**: `0.5-5` -- **Color**: Not changeable -- **Valence**: Negative, proportional to size +* **Name**: `BadGoalBounce` +* **Size Range**: `0.5-5` +* **Color**: Not changeable +* **Valence**: Negative, proportional to size Notes: The `rotations` parameter sets the direction of motion. 
### Stationary Non-Episode-Ending Positive Goal + -- **Name**: `GoodGoalMulti` -- **Size Range**: `0.5-5` -- **Color**: Not changeable -- **Valence**: Positive, proportional to size +* **Name**: `GoodGoalMulti` +* **Size Range**: `0.5-5` +* **Color**: Not changeable +* **Valence**: Positive, proportional to size ### Moving Non-Episode-Ending Positive Goal + -- **Name**: `GoodGoalMultiBounce` -- **Size Range**: `0.5-5` -- **Color**: Not changeable -- **Valence**: Positive, proportional to size +* **Name**: `GoodGoalMultiBounce` +* **Size Range**: `0.5-5` +* **Color**: Not changeable +* **Valence**: Positive, proportional to size Notes: The `rotations` parameter sets the direction of motion. ### Non-Episode-Ending Ripen Goal + -- **Name**: `RipenGoal` -- **Valence Range**: `0-5` -- **Size**: automatically sets to final reward value -- **Color**: Not changeable -- **Ripen Onset Delay Range (frames)** (`delays`): `0-Inf` (default `150`) -- **Ripen Rate (frames)** (`changeRates`): `0.001-Inf` (default `0.005`) -- **Alias**: `AntiDecayGoal` +* **Name**: `RipenGoal` +* **Valence Range**: `0-5` +* **Size**: automatically sets to final reward value +* **Color**: Not changeable +* **Ripen Onset Delay Range (frames)** (`delays`): `0-Inf` (default `150`) +* **Ripen Rate (frames)** (`changeRates`): `0.001-Inf` (default `0.005`) +* **Alias**: `AntiDecayGoal` Notes: Colour changes (from purple to grey) and a radial-timer fills over time during ripening process. Initial valence can be set with a float passed to the `initialValues` parameter, and valence can be set with a float passed to the `finalValues` parameter. 
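For instance, a `RipenGoal` that starts ripening after 100 frames could be sketched like this, using the parameters listed above (values are illustrative, not defaults):

```yaml
    - !Item
      name: RipenGoal
      positions:
      - !Vector3 {x: 10, y: 0, z: 10}
      delays: [100]          # ripen onset, in frames
      changeRates: [0.005]   # ripen rate
      initialValues: [0.5]   # starting valence
      finalValues: [2.5]     # valence (and size) once fully ripened
```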
### Non-Episode-Ending Decay Goal + -- **Name**: `DecayGoal` -- **Valence Range**: `0-5` -- **Size**: automatically sets to final reward value -- **Color**: Not changeable -- **Decay Onset Delay Range (frames)** (`delays`): `0-Inf` (default `150`) -- **Decay Rate (frames)** (`changeRates`): `-0.001-Inf` (default `-0.005`, automatically converts to negative values if positive provided) +* **Name**: `DecayGoal` +* **Valence Range**: `0-5` +* **Size**: automatically sets to final reward value +* **Color**: Not changeable +* **Decay Onset Delay Range (frames)** (`delays`): `0-Inf` (default `150`) +* **Decay Rate (frames)** (`changeRates`): `-0.001-Inf` (default `-0.005`, automatically converts to negative values if positive provided) Notes: Colour changes (from purple to grey) and a radial-timer depletes over time during decay process. Initial valence can be set with a float passed to the `initialValues` parameter, and valence can be set with a float passed to the `finalValues` parameter. ### Episode-Ending Grow Goal + -- **Name**: `GrowGoal` -- **Size Range**: `0-5` -- **Valence Change Rate** (`changeRates`): `0.001-Inf` (default `0.005`) -- **Valence**: Positive, proportional to size -- **Color**: Not changeable -- **Growth Onset Delay Range (frames)** (`delays`): `0-Inf` (default `0`) +* **Name**: `GrowGoal` +* **Size Range**: `0-5` +* **Valence Change Rate** (`changeRates`): `0.001-Inf` (default `0.005`) +* **Valence**: Positive, proportional to size +* **Color**: Not changeable +* **Growth Onset Delay Range (frames)** (`delays`): `0-Inf` (default `0`) -Notes: Growth halts if the the goal is trapped between/underneath other objects. Maximum size is `5`. Initial valence can be set with a float passed to the `initialValues` parameter, and valence can be set with a float passed to the `finalValues` parameter. +Notes: Growth halts if the goal is trapped between/underneath other objects. Maximum size is `5`. 
Initial valence can be set with a float passed to the `initialValues` parameter, and valence can be set with a float passed to the `finalValues` parameter. ### Episode-Ending Shrink Goal + -- **Name**: `ShrinkGoal` -- **Size Range**: `0-5` -- **Valence Change Rate** (`changeRates`): `0.001-Inf` (default `0.005`) -- **Valence**: Positive, proportional to size -- **Color**: Not changeable -- **Growth Onset Delay Range (frames)** (`delays`): `0-Inf` (default `0`) +* **Name**: `ShrinkGoal` +* **Size Range**: `0-5` +* **Valence Change Rate** (`changeRates`): `0.001-Inf` (default `0.005`) +* **Valence**: Positive, proportional to size +* **Color**: Not changeable +* **Shrink Onset Delay Range (frames)** (`delays`): `0-Inf` (default `0`) -Notes: Maximum size is `5`. Initial valence can be set with a float passed to the `initialValues` parameter, and valence can be set with a float passed to the `finalValues` parameter. +Notes: Maximum size is `5`. Initial valence can be set with a float passed to the `initialValues` parameter, and valence can be set with a float passed to the `finalValues` parameter. ### Episode-Ending DeathZone + -- **Name**: `DeathZone` -- **Size Range**: `(1,0.5,1)-(40,10,40)` -- **Valence**: `-1` -- **Color**: Not changeable +* **Name**: `DeathZone` +* **Size Range**: `(1,0.5,1)-(40,10,40)` +* **Valence**: `-1` +* **Color**: Not changeable ### Non-Episode-Ending HotZone + -- **Name**: `HotZone` -- **Valence**: `min(-10/t, -0.00001)` for `t > 0`, `-0.00001` otherwise, where `t` is the number of steps in the episode -- **Color**: Not changeable +* **Name**: `HotZone` +* **Valence**: `min(-10/t, -0.00001)` for `t > 0`, `-0.00001` otherwise, where `t` is the number of steps in the episode +* **Color**: Not changeable Notes: When the agent enters the hot zone, reward decrement is accelerated by a factor of 10 compared to time alone. If a `DeathZone` and a `HotZone` overlap, `DeathZone` prevails. - ## Dispensers These objects dispense valenced objects. 
They are immovable. ### SpawnerTree + -- **Name**: `SpawnerTree` -- **Size**: Fixed (`5.19 x 5.95 x 5.02`) -- **Spawned Goal Size Range**: `0.2-3` -- **Number of goals to spawn** (`spawnCounts`): `0-Inf` (leave blank or set to `-1` to spawn infinitely) -- **Color**: Not changeable -- **Growth Onset Delay Range (frames)** (`delays`): `0-Inf` (default `0`) +* **Name**: `SpawnerTree` +* **Size**: Fixed (`5.19 x 5.95 x 5.02`) +* **Spawned Goal Size Range**: `0.2-3` +* **Number of goals to spawn** (`spawnCounts`): `0-Inf` (leave blank or set to `-1` to spawn infinitely) +* **Color**: Not changeable +* **Growth Onset Delay Range (frames)** (`delays`): `0-Inf` (default `0`) -Notes: The tree spawns `GoodGoalMulti`. They grow on the trees before dropping to the floor once they have reached their final size. The starting size of the goals can be set with the `initialValues` parameter (default: `0.2`) and the final size with the `finalValues` parameter (default: `1.0`). The valence of the goals is proportional to their size. The number of seconds it takes to 'grow' the goals on the tree (relative to the timescale of the environment) can be set with the `ripenTimes` parameter. The number of seconds between spawnings (relative to the timescale of the environment) can be set with the `timesBetweenSpawns` parameter (default: 4.0). +Notes: The tree spawns `GoodGoalMulti`. The goals grow on the tree before dropping to the floor once they have reached their final size. The starting size of the goals can be set with the `initialValues` parameter (default: `0.2`) and the final size with the `finalValues` parameter (default: `1.0`). The valence of the goals is proportional to their size. The number of seconds it takes to 'grow' the goals on the tree (relative to the timescale of the environment) can be set with the `ripenTimes` parameter. 
The number of seconds between spawnings (relative to the timescale of the environment) can be set with the `timesBetweenSpawns` parameter (default: 4.0). ### SpawnerDispenserTall + -- **Name**: `SpawnerDispenserTall` -- **Size**: Fixed (`1.67 x 4.46 x 1.67`) -- **Spawned Goal Size Range**: `0.2-1` -- **Number of goals to spawn** (`spawnCounts`): `0-Inf` (leave blank or set to `-1` to spawn infinitely) -- **Color**: RGB range `(0,0,0)-(255,255,255)` -- **Growth Onset Delay Range (frames)** (`delays`): `0-Inf` (default `0`) -- **Alias**: `SpawnerDispenser` +* **Name**: `SpawnerDispenserTall` +* **Size**: Fixed (`1.67 x 4.46 x 1.67`) +* **Spawned Goal Size Range**: `0.2-1` +* **Number of goals to spawn** (`spawnCounts`): `0-Inf` (leave blank or set to `-1` to spawn infinitely) +* **Color**: RGB range `(0,0,0)-(255,255,255)` +* **Growth Onset Delay Range (frames)** (`delays`): `0-Inf` (default `0`) +* **Alias**: `SpawnerDispenser` -Notes: The dispenser spawns `GoodGoalMulti`. The valence of the goals is proportional to their size. The number of seconds between spawnings (relative to the timescale of the environment) can be set with the `timesBetweenSpawns` parameter (default: 1.5). The object has a door that can be animated to open and close. The number of seconds before the door opens can be set with the `doorDelays` parameter (default: `10.0`), and the number of seconds the door remains open for can be ste with the `timesBetweenDoorOpens` parameter (default: `-1`, if `< 0` then, once opened, the door stays open permanently). +Notes: The dispenser spawns `GoodGoalMulti` . The valence of the goals is proportional to their size. The number of seconds between spawnings (relative to the timescale of the environment) can be set with the `timesBetweenSpawns` parameter (default: 1.5). The object has a door that can be animated to open and close. 
The number of seconds before the door opens can be set with the `doorDelays` parameter (default: `10.0`), and the number of seconds the door remains open for can be set with the `timesBetweenDoorOpens` parameter (default: `-1`, if `< 0` then, once opened, the door stays open permanently). ### SpawnerContainerShort + -- **Name**: `SpawnerDispenserShort` -- **Size**: Fixed (`1.67 x 1.67 x 1.67`) -- **Spawned Goal Size Range**: `0.2-1` -- **Number of goals to spawn** (`spawnCounts`): `0-Inf` (leave blank or set to `-1` to spawn infinitely) -- **Color**: RGB range `(0,0,0)-(255,255,255)` -- **Growth Onset Delay Range (frames)** (`delays`): `0-Inf` (default `0`) -- **Alias**: `SpawnerDispenser` +* **Name**: `SpawnerDispenserShort` +* **Size**: Fixed (`1.67 x 1.67 x 1.67`) +* **Spawned Goal Size Range**: `0.2-1` +* **Number of goals to spawn** (`spawnCounts`): `0-Inf` (leave blank or set to `-1` to spawn infinitely) +* **Color**: RGB range `(0,0,0)-(255,255,255)` +* **Growth Onset Delay Range (frames)** (`delays`): `0-Inf` (default `0`) +* **Alias**: `SpawnerDispenser` -Notes: The dispenser spawns `GoodGoalMulti`. The valence of the goals is proportional to their size. The number of seconds between spawnings (relative to the timescale of the environment) can be set with the `timesBetweenSpawns` parameter (default: 1.5). The object has a door that can be animated to open and close. The number of seconds before the door opens can be set with the `doorDelays` parameter (default: `10.0`), and the number of seconds the door remains open for can be ste with the `timesBetweenDoorOpens` parameter (default: `-1`, if `< 0` then, once opened, the door stays open permanently). +Notes: The dispenser spawns `GoodGoalMulti`. The valence of the goals is proportional to their size. The number of seconds between spawnings (relative to the timescale of the environment) can be set with the `timesBetweenSpawns` parameter (default: 1.5). 
The object has a door that can be animated to open and close. The number of seconds before the door opens can be set with the `doorDelays` parameter (default: `10.0`), and the number of seconds the door remains open for can be set with the `timesBetweenDoorOpens` parameter (default: `-1`, if `< 0` then, once opened, the door stays open permanently). ### SpawnerButton + -- **Name**: `SpawnerButton` -- **Size**: Fixed -- **Spawned Goal Size**: `1` -- **Color**: Not changeable -- **Alias**: `Pillar-Button` +* **Name**: `SpawnerButton` +* **Size**: Fixed +* **Spawned Goal Size**: `1` +* **Color**: Not changeable +* **Alias**: `Pillar-Button` -Notes: Spawns a goal when the player/agent *interacts* with it by colliding with the physical object. The position of the spawned goal can be set with the a `!Vector3` passed to the `rewardSpawnPos` parameter. +Notes: Spawns a goal when the player/agent *interacts* with it by colliding with the physical object. The position of the spawned goal can be set with a `!Vector3` passed to the `rewardSpawnPos` parameter. 
The probability that a goal will spawn upon a press can be set with a float between 0 and 1 passed to the `spawnProbability` parameter. Different valenced objects can be spawned on different presses. A list, such as `["GoodGoal", "BadGoal", "GoodGoalMulti"]`, can be passed to `rewardNames` to define the valenced objects (only these three are supported at the moment). A corresponding list of floats between 0 and 1 can be passed to the `rewardWeights` to determine the probability of spawning each of the types of valenced object. The probabilities are normalized to sum to one. The number of frames taken for the button to depress upon touching it can be defined with `moveDurations`, and the number of frames for the button to be reset before it can be pressed again can be set with `resetDurations`. ## Sign Boards +

-

\ No newline at end of file +
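By way of illustration, the dispenser parameters described above might appear in a configuration like the following sketch (values and layout are arbitrary; the tag syntax follows [Background-YAML.md](/docs/Background-YAML.md)):

```yaml
    items:
    - !Item
      name: SpawnerTree
      positions:
      - !Vector3 {x: 10, y: 0, z: 30}
      initialValues: [0.2]        # size of goals when they appear on the tree
      finalValues: [1.0]          # size at which goals drop to the floor
      timesBetweenSpawns: [4.0]   # seconds between spawns
      spawnCounts: [5]            # stop after five goals
    - !Item
      name: SpawnerButton
      positions:
      - !Vector3 {x: 30, y: 0, z: 10}
      spawnProbability: 1.0
      rewardNames: ["GoodGoal", "BadGoal", "GoodGoalMulti"]
      rewardWeights: [0.5, 0.2, 0.3]   # normalized to sum to one
      rewardSpawnPos: !Vector3 {x: 30, y: 0, z: 20}
```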

diff --git a/docs/Background-Cognitive-Science.md b/docs/Background-Cognitive-Science.md index bab7938b8..db43d2b51 100644 --- a/docs/Background-Cognitive-Science.md +++ b/docs/Background-Cognitive-Science.md @@ -1,19 +1,20 @@ # Background: Cognitive Science #### Table of Contents -- [Introduction to Cognitive Science](#Introduction-to-Cognitive-Science) - - [What is Cognitive Science?](#What-is-Cognitive-Science?) - - [Key Areas in Cognitive Science](#Key-Areas-in-Cognitive-Science) + +* [Introduction to Cognitive Science](#Introduction-to-Cognitive-Science) + + [What is Cognitive Science?](#What-is-Cognitive-Science?) + + [Key Areas in Cognitive Science](#Key-Areas-in-Cognitive-Science) - [Perception and Sensation](#Perception-and-Sensation) - [Memory and Learning](#Memory-and-Learning) - [Language and Cognition](#Language-and-Cognition) - [Decision Making and Problem Solving](#Decision-Making-and-Problem-Solving) - [Computational Models](#Computational-Models) - [Neuroscience](#Neuroscience) - - [Cognitive Science in Practice](#Cognitive-Science-in-Practice) - - [Conclusion](#Conclusion) - + + [Cognitive Science in Practice](#Cognitive-Science-in-Practice) + + [Conclusion](#Conclusion) + # Introduction to Cognitive Science Cognitive Science is an interdisciplinary field that explores the nature of cognition, encompassing a wide range of topics from the workings of the brain to the processes of thinking and learning. This document aims to provide a foundational overview of cognitive science and its various research areas. @@ -28,46 +29,46 @@ Cognitive Science is the scientific study of the mind and its processes, includi Understanding how humans perceive the world around them is a fundamental aspect of cognitive science. This includes the study of sensory systems and perceptual processes. 
-- **Resource**: [Sensation and Perception](https://en.wikipedia.org/wiki/Sensory_processing) +* **Resource**: [Sensation and Perception](https://en.wikipedia.org/wiki/Sensory_processing) ### Memory and Learning Memory and learning are central to cognitive science, encompassing how we encode, store, and retrieve information, and how learning occurs. -- **Resource**: [Memory](https://en.wikipedia.org/wiki/Memory), [Learning](https://en.wikipedia.org/wiki/Learning) +* **Resource**: [Memory](https://en.wikipedia.org/wiki/Memory), [Learning](https://en.wikipedia.org/wiki/Learning) ### Language and Cognition The study of language in cognitive science involves understanding how language is processed and produced, and its role in cognition. -- **Resource**: [Language Processing](https://en.wikipedia.org/wiki/Language_processing_in_the_brain) +* **Resource**: [Language Processing](https://en.wikipedia.org/wiki/Language_processing_in_the_brain) ### Decision Making and Problem Solving This area focuses on how individuals make decisions, solve problems, and think critically. -- **Resource**: [Decision Making](https://en.wikipedia.org/wiki/Decision-making) +* **Resource**: [Decision Making](https://en.wikipedia.org/wiki/Decision-making) ### Computational Models Cognitive science uses computational models to simulate and understand cognitive processes. -- **Resource**: [Computational Cognition](https://en.wikipedia.org/wiki/Computational_cognition) +* **Resource**: [Computational Cognition](https://en.wikipedia.org/wiki/Computational_cognition) ### Neuroscience Neuroscience in cognitive science explores the neural basis of cognitive processes. 
-- **Resource**: [Cognitive Neuroscience](https://en.wikipedia.org/wiki/Cognitive_neuroscience) +* **Resource**: [Cognitive Neuroscience](https://en.wikipedia.org/wiki/Cognitive_neuroscience) ## Cognitive Science in Practice Cognitive science has practical applications in areas such as artificial intelligence, human-computer interaction, education, and mental health. -- **AI and Machine Learning**: Applying cognitive principles to develop intelligent systems. -- **Human-Computer Interaction**: Designing user interfaces that align with human cognitive processes. -- **Educational Technology**: Enhancing learning experiences based on cognitive research. -- **Mental Health**: Understanding and treating cognitive disorders. +* **AI and Machine Learning**: Applying cognitive principles to develop intelligent systems. +* **Human-Computer Interaction**: Designing user interfaces that align with human cognitive processes. +* **Educational Technology**: Enhancing learning experiences based on cognitive research. +* **Mental Health**: Understanding and treating cognitive disorders. ## Conclusion @@ -75,4 +76,4 @@ Cognitive Science offers profound insights into the workings of the human mind a --- -**N.B:** _This document provides a concise yet comprehensive overview of cognitive science, designed to introduce newcomers to the field and its diverse research areas. We hope it has provided you with a solid understanding of the fundamental concepts of cognitive science, as understanding the workings of the human mind is essential enhancing the capabilities of artificial intelligence, which is the primary goal of Animal-AI as a research platform._ +**N.B.:** _This document provides a concise yet comprehensive overview of cognitive science, designed to introduce newcomers to the field and its diverse research areas. 
We hope it has provided you with a solid understanding of the fundamental concepts of cognitive science, as understanding the workings of the human mind is essential to enhancing the capabilities of artificial intelligence, which is the primary goal of Animal-AI as a research platform._
diff --git a/docs/Background-Machine-Learning.md b/docs/Background-Machine-Learning.md
index e952966a6..718e5379a 100644
--- a/docs/Background-Machine-Learning.md
+++ b/docs/Background-Machine-Learning.md
@@ -1,17 +1,18 @@
 # Background: Machine Learning

 #### Table of Contents
-- [Introduction to Machine Learning Concepts](#Introduction-to-Machine-Learning-Concepts)
-  - [What is Machine Learning?](#What-is-Machine-Learning?)
-  - [Types of Machine Learning Algorithms](#Types-of-Machine-Learning-Algorithms)
+
+* [Introduction to Machine Learning Concepts](#introduction-to-machine-learning-concepts)
+  + [What is Machine Learning?](#what-is-machine-learning)
+  + [Types of Machine Learning Algorithms](#types-of-machine-learning-algorithms)
    - [Unsupervised Learning](#unsupervised-learning)
    - [Supervised Learning](#supervised-learning)
    - [Reinforcement Learning](#reinforcement-learning)
-  - [Deep Learning in Machine Learning](#Deep-Learning-in-Machine-Learning)
-  - [Training and Inference in ML](#Training-and-Inference-in-ML)
-  - [Conclusion](#Conclusion)
-
+  + [Deep Learning in Machine Learning](#deep-learning-in-machine-learning)
+  + [Training and Inference in ML](#training-and-inference-in-ml)
+  + [Conclusion](#conclusion)
+
# Introduction to Machine Learning Concepts

This document aims to provide an accessible overview of Machine Learning (ML) for those new to the field, particularly users of the ML-Agents Toolkit. While we won't cover machine learning exhaustively, we'll touch upon its key aspects, as numerous comprehensive resources are available online.
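The reinforcement-learning branch listed in the table of contents above can be made concrete with a tiny, self-contained sketch (plain Python, independent of the ML-Agents API; all names here are illustrative): a single tabular Q-learning update, the kind of experience-driven improvement a training phase performs.

```python
# Toy tabular Q-learning step, illustrating the "training phase" idea:
# the agent improves its value estimates from (state, action, reward, next_state)
# experience. None of these names come from the ML-Agents API.

def q_update(q, state, action, reward, next_state, alpha=0.5, gamma=0.9):
    """Move Q(s, a) toward the target reward + gamma * max_a' Q(s', a')."""
    best_next = max(q[next_state])
    q[state][action] += alpha * (reward + gamma * best_next - q[state][action])
    return q

# Two states, two actions, all values start at zero.
q = [[0.0, 0.0], [0.0, 0.0]]
# Taking action 1 in state 0 yields reward 1 and moves the agent to state 1.
q = q_update(q, state=0, action=1, reward=1.0, next_state=1)
print(q[0][1])  # prints 0.5: half the observed reward, scaled by alpha
```

Repeating such updates over many episodes is, in miniature, what the training phase described in this document does; inference then simply reads the learned values without updating them.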
@@ -46,4 +47,4 @@ All ML branches involve a training phase, where the model is built using provide
 Machine Learning's diverse algorithms and applications make it a fascinating and impactful field. Understanding its core concepts is essential for anyone looking to explore AI or utilize tools like the ML-Agents Toolkit for game development and beyond.

----
\ No newline at end of file
+---
diff --git a/docs/Background-Unity.md b/docs/Background-Unity.md
index 2085a36f3..2fcabcfeb 100644
--- a/docs/Background-Unity.md
+++ b/docs/Background-Unity.md
@@ -1,17 +1,18 @@
 # Background: Unity

 #### Table of Contents
-- [Unity Engine: Fundamental Concepts](#Unity-Engine:-Fundamental-Concepts)
-  - [Unity Editor](#Unity-Editor)
-  - [Prefabs](#Prefabs)
-  - [Scripting](#Scripting)
-  - [Physics](#Physics)
-  - [Animation](#Animation)
-  - [Asset Management](#Asset-Management)
-  - [User Interface (UI)](#User-Interface-(UI))
-  - [Conclusion](#Conclusion)
-
+* [Unity Engine: Fundamental Concepts](#unity-engine-fundamental-concepts)
+  + [Unity Editor](#unity-editor)
+  + [Prefabs](#prefabs)
+  + [Scripting](#scripting)
+  + [Physics](#physics)
+  + [Animation](#animation)
+  + [Asset Management](#asset-management)
+  + [User Interface (UI)](#user-interface-ui)
+  + [Conclusion](#conclusion)
+
+
# Unity Engine: Fundamental Concepts

Unity is a powerful game development engine that offers a rich set of features for creating immersive 2D and 3D games. This document provides an overview of the fundamental concepts of the Unity Engine, complete with links to resources for deeper understanding. For more detail, refer to the [Unity Documentation](https://docs.unity3d.com/Manual/index.html).
@@ -22,55 +23,54 @@ Unity is a powerful game development engine that offers a rich set of features f

- ## Unity Editor The Unity Editor is the core interface where game development takes place. It provides a user-friendly environment for building game scenes, adding assets, and scripting behavior. -- **Overview**: [Unity Editor Overview](https://docs.unity3d.com/Manual/UsingTheEditor.html) -- **Interface Guide**: [Unity Editor Interface](https://learn.unity.com/tutorial/unity-editor-interface-overview) +* **Overview**: [Unity Editor Overview](https://docs.unity3d.com/Manual/UsingTheEditor.html) +* **Interface Guide**: [Unity Editor Interface](https://learn.unity.com/tutorial/unity-editor-interface-overview) ## Prefabs Prefabs in Unity are pre-configured templates of game objects that can be reused across your projects. They are essential for efficient game development. -- **Introduction to Prefabs**: [Working with Prefabs](https://docs.unity3d.com/Manual/Prefabs.html) -- **Prefab Workflow**: [Prefab Workflow Guide](https://learn.unity.com/tutorial/introduction-to-prefabs) +* **Introduction to Prefabs**: [Working with Prefabs](https://docs.unity3d.com/Manual/Prefabs.html) +* **Prefab Workflow**: [Prefab Workflow Guide](https://learn.unity.com/tutorial/introduction-to-prefabs) ## Scripting Scripting in Unity is primarily done using C#. It allows you to define the behavior of your game objects, control game logic, and interact with user inputs. -- **Scripting Overview**: [Unity Scripting API](https://docs.unity3d.com/ScriptReference/) -- **C# Scripting Tutorial**: [C# Scripting in Unity](https://learn.unity.com/tutorial/introduction-to-scripting) +* **Scripting Overview**: [Unity Scripting API](https://docs.unity3d.com/ScriptReference/) +* **C# Scripting Tutorial**: [C# Scripting in Unity](https://learn.unity.com/tutorial/introduction-to-scripting) ## Physics Unity's physics engine allows for realistic simulation of physical interactions between objects in the game world. 
-- **Physics System**: [Unity Physics System](https://docs.unity3d.com/Manual/PhysicsSection.html) -- **Rigidbody and Colliders**: [Using Rigidbody and Colliders](https://learn.unity.com/tutorial/physics-rigidbodies-and-colliders) +* **Physics System**: [Unity Physics System](https://docs.unity3d.com/Manual/PhysicsSection.html) +* **Rigidbody and Colliders**: [Using Rigidbody and Colliders](https://learn.unity.com/tutorial/physics-rigidbodies-and-colliders) ## Animation Unity provides a comprehensive system for animating characters and objects, offering tools for creating detailed animations and controlling them via scripts. -- **Animation Overview**: [Unity Animation System](https://docs.unity3d.com/Manual/AnimationOverview.html) -- **Animator Component**: [Using the Animator](https://learn.unity.com/tutorial/animator-component) +* **Animation Overview**: [Unity Animation System](https://docs.unity3d.com/Manual/AnimationOverview.html) +* **Animator Component**: [Using the Animator](https://learn.unity.com/tutorial/animator-component) ## Asset Management Managing assets is a crucial part of game development in Unity. Unity supports a wide range of asset types including 3D models, textures, audio, and more. -- **Asset Management**: [Unity Asset Workflow](https://docs.unity3d.com/Manual/AssetWorkflow.html) -- **Asset Store**: [Unity Asset Store](https://assetstore.unity.com/) +* **Asset Management**: [Unity Asset Workflow](https://docs.unity3d.com/Manual/AssetWorkflow.html) +* **Asset Store**: [Unity Asset Store](https://assetstore.unity.com/) ## User Interface (UI) Unity's UI system allows you to create interactive and intuitive user interfaces for your games. 
-- **UI Overview**: [Unity UI System](https://docs.unity3d.com/Manual/UISystem.html) -- **UI Toolkit**: [Using Unity's UI Toolkit](https://learn.unity.com/tutorial/introduction-to-the-new-ui-system) +* **UI Overview**: [Unity UI System](https://docs.unity3d.com/Manual/UISystem.html) +* **UI Toolkit**: [Using Unity's UI Toolkit](https://learn.unity.com/tutorial/introduction-to-the-new-ui-system) ## Conclusion diff --git a/docs/Background-YAML.md b/docs/Background-YAML.md index 670056f2f..593d093b5 100644 --- a/docs/Background-YAML.md +++ b/docs/Background-YAML.md @@ -1,19 +1,19 @@ # Background: YAML #### Table of Contents -- [Background: YAML](#background-yaml) + +* [Background: YAML](#background-yaml) - [Table of Contents](#table-of-contents) - - [YAML? What is it?](#yaml-what-is-it) - - [Configuration of Training Environments and Agents](#configuration-of-training-environments-and-agents) + + [YAML? What is it?](#yaml-what-is-it) + + [Configuration of Training Environments and Agents](#configuration-of-training-environments-and-agents) - [Defining Agent Behaviors](#defining-agent-behaviors) - [Setting Hyperparameters for Training](#setting-hyperparameters-for-training) - - [Example of a YAML Configuration in ML-Agents](#example-of-a-yaml-configuration-in-ml-agents) - - [Example of a YAML Configuration in Animal-AI](#example-of-a-yaml-configuration-in-animal-ai) - - [Advantages in Animal-AI Context](#advantages-in-animal-ai-context) + + [Example of a YAML Configuration in ML-Agents](#example-of-a-yaml-configuration-in-ml-agents) + + [Example of a YAML Configuration in Animal-AI](#example-of-a-yaml-configuration-in-animal-ai) + + [Advantages in Animal-AI Context](#advantages-in-animal-ai-context) - [Easy to Read and Modify](#easy-to-read-and-modify) - [Facilitating Complex Configurations](#facilitating-complex-configurations) - ## YAML? What is it? 
*YAML* (YAML Ain't Markup Language) is a data serialization format widely used in Unity and other setups (as well as in ML-Agents) for its readability and ease of use. It allows developers and researchers to define and adjust the behavior and training parameters of AI agents within Unity simulations. Due to its human-readable format, YAML is also useful for researchers who have little experience with programming.
@@ -51,6 +51,7 @@ behaviors:
       time_horizon: 64
       summary_freq: 10000
 ```
+
The above example shows a YAML training configuration for a soccer player agent in ML-Agents. The `SoccerPlayer` behavior name identifies the policy being trained. `trainer_type` selects the training algorithm, the `hyperparameters` section tunes the training process, `network_settings` defines the neural network architecture, and `reward_signals` specifies the reward signals used during training. `max_steps` caps the total number of training steps, `time_horizon` sets how many steps of experience are collected per agent before being added to the experience buffer, and `summary_freq` controls how often training statistics are logged.

## Example of a YAML Configuration in Animal-AI

@@ -78,6 +79,7 @@ arenas:
       spawnProbability: 1.0
       rewardSpawnPos: !Vector3 {x: 20, y: 0, z: 35}
 ```
+
The above example shows a YAML configuration file for an arena in Animal-AI. The `!ArenaConfig` tag marks the file as an arena configuration. The `arenas` section lists the arenas in the environment, and the key `0` identifies the first arena. `pass_mark` defines the minimum score required to pass the arena, `t` defines the maximum number of steps an episode in the arena may last, and the `items` section defines the objects placed in the arena.
`name` gives the name of the object to spawn, while `positions` and `rotations` place and orient it. `moveDurations` and `resetDurations` set how long the object's movements and resets last. `rewardNames` and `rewardWeights` list the rewards the object can spawn and their weights, `spawnProbability` sets the probability that the object spawns, and `rewardSpawnPos` fixes where the object's reward appears.

## Advantages in Animal-AI Context

@@ -90,4 +92,4 @@ YAML's human-readable format makes it easier for developers and researchers to u
 The structure of YAML supports complex configurations with nested parameters, allowing for clear hierarchies and groupings of settings in Animal-AI. This makes it easier for developers to organize and modify their configurations.

----
\ No newline at end of file
+---
diff --git a/docs/CHANGELOG.md b/docs/CHANGELOG.md
index db150cff5..6ea4f4525 100644
--- a/docs/CHANGELOG.md
+++ b/docs/CHANGELOG.md
@@ -8,108 +8,129 @@ The format is based on [Keep a Changelog](http://keepachangelog.com/en/1.0.0/) a

 ### Added
-
 ### Changed
-
 ### Fixed
-
 ---
+

## [3.1.3] - 30.09.2023
+
### Fixed

-- Resolved Spawner Tree Clock desync issue.
-- Resolved Multiple Arenas improper cycling issue.
-- Addressed Unity native warning in Training Arena script.
-- Rectified the invisibility issue of the SignBoard prefab.
+* Resolved Spawner Tree Clock desync issue.
+* Resolved Multiple Arenas improper cycling issue.
+* Addressed Unity native warning in Training Arena script.
+* Rectified the invisibility issue of the SignBoard prefab.

### Added

-- Enhanced arenas randomization via the `randomizeArenas` parameter in YAML.
-- Added more robust error-checking for arena ID's and arena cycling.
-- Conducted unit tests on `TrainingArena.cs` and `ArenaParameters.cs`.
-- Added visual elements of paired reinforcing cues such as colours and short GIFs for better visual understanding for the user.
+
+* Enhanced arenas randomization via the `randomizeArenas` parameter in YAML.
+* Added more robust error-checking for arena IDs and arena cycling.
+* Conducted unit tests on `TrainingArena.cs` and `ArenaParameters.cs`.
+* Added visual elements for paired reinforcing cues, such as colours and short GIFs, to aid the user's visual understanding.

### Changed

-- Shortened end-of-episode notification to *2.5 seconds*.
-- Undertook minor Unity script optimizations.
-- Updated the `README.md` file with more detailed instructions.
+
+* Shortened end-of-episode notification to *2.5 seconds*.
+* Undertook minor Unity script optimizations.
+* Updated the `README.md` file with more detailed instructions.

## [3.1.2.exp1] - 11.09.2023
+
### Fixed

-- Implemented hot fix for a newly discovered bug affecting the Spawner Tree.
+* Implemented a hotfix for a newly discovered bug affecting the Spawner Tree.

## [3.1.1] - 10.08.2023
+
### Added

-- Introduced "End of Episode Notification" feature.
-- Supported "Headless" mode unofficially for training agents (works with Raycasting).
+* Introduced "End of Episode Notification" feature.
+* Supported "Headless" mode unofficially for training agents (works with Raycasting).

### Fixed

-- Fixed bug affecting the Spawner Tree.
-- Fixed bug affecting the Interactive Button.
+
+* Fixed bug affecting the Spawner Tree.
+* Fixed bug affecting the Interactive Button.

## [3.1.0]
+
### Added

-- Introduced "Interactive Button" feature.
+* Introduced "Interactive Button" feature.

## [3.0.2]
+
### Changed

-- Upgraded Mlagents to 2.3.0-exp3 (mlagents python version 0.30.0).
+* Upgraded ML-Agents to 2.3.0-exp3 (mlagents python version 0.30.0).

## [3.0.1]
+
### Added

-- Added Agent Freezing Parameter.
+* Added Agent Freezing Parameter.
## [3.0] + ### Changed -- Updated agent handling for improved stop and acceleration. -- Added new objects, spawners, signs, goal types. -- Updated graphics for many objects. -- Made the Unity Environment available. -- Upgraded to Mlagents 2.1.0-exp.1 (ml-agents python version 0.27.0). +* Updated agent handling for improved stop and acceleration. +* Added new objects, spawners, signs, goal types. +* Updated graphics for many objects. +* Made the Unity Environment available. +* Upgraded to Mlagents 2.1.0-exp.1 (ml-agents python version 0.27.0). ### Fixed -- Various bug fixes. + +* Various bug fixes. ### Note -- Due to changes to controls and graphics, agents trained on previous versions might not perform the same. + +* Due to changes to controls and graphics, agents trained on previous versions might not perform the same. ## [2.2.3] + ### Added -- Ability to specify multiple different arenas in a single YAML config file. +* Ability to specify multiple different arenas in a single YAML config file. ## [2.2.2] + ### Changed -- Introduced a low-quality version with improved fps. +* Introduced a low-quality version with improved fps. ## [2.2.1] + ### Fixed -- Improved UI scaling with respect to screen size. -- Fixed an issue with cardbox objects spawning at the wrong sizes. -- Fixed an issue where the environment would time out incorrectly. +* Improved UI scaling with respect to screen size. +* Fixed an issue with cardbox objects spawning at the wrong sizes. +* Fixed an issue where the environment would time out incorrectly. ### Changed -- Improved Death Zone shader for unusual Zone sizes. + +* Improved Death Zone shader for unusual Zone sizes. ## [2.2.0] + ### Added -- Switched to health-based system. -- Added basic Gym Wrapper. -- Added basic heuristic agent for benchmarking and testing. +* Switched to health-based system. +* Added basic Gym Wrapper. +* Added basic heuristic agent for benchmarking and testing. ### Fixed -- Fixed a reset environment bug during training. 
-- Added the ability to set the DecisionPeriod (frameskip) when instantiating an environment. + +* Fixed a reset environment bug during training. +* Added the ability to set the DecisionPeriod (frameskip) when instantiating an environment. ## [2.1.1] - 01.07.2021 + ### Added -- RayCast Observations +* RayCast Observations + ### Fixed -- Fixed raycast length being less than diagonal length of standard arena. + +* Fixed raycast length being less than diagonal length of standard arena. ## [2.1] - Beta Release 2019 + ### Added -- Raycast observations. -- Agent global position to observations. +* Raycast observations. +* Agent global position to observations. ### Changed -- Upgraded to ML-Agents release 2 (0.26.0). + +* Upgraded to ML-Agents release 2 (0.26.0). diff --git a/docs/FAQ.md b/docs/FAQ.md index 1fd1a01b6..9eee9972f 100644 --- a/docs/FAQ.md +++ b/docs/FAQ.md @@ -1,78 +1,95 @@ # Frequently Asked Questions + This document provides a comprehensive list of frequently asked questions and troubleshooting tips for the Animal-AI environment. #### Table of Contents - - [1. Troubleshooting Installation Issues](#1-troubleshooting-installation-issues) + + + [1. Troubleshooting Installation Issues](#1-troubleshooting-installation-issues) - [1.1 Resolving Environment Permission Errors](#11-resolving-environment-permission-errors) - [1.1.1 For macOS and Linux Users](#111-for-macos-and-linux-users) - [1.1.2 For Windows Users](#112-for-windows-users) - [1.2 Addressing Environment Connection Timeouts](#12-addressing-environment-connection-timeouts) - [1.3 Communication Port Conflict](#13-communication-port-conflict) - [1.4 Mean Reward Displaying NaN](#14-mean-reward-displaying-nan) - - [2. Python API / Package Dependency Issues](#2-python-api--package-dependency-issues) + + [2. 
Python API / Package Dependency Issues](#2-python-api--package-dependency-issues)
    - [2.1 No Module Named `animalai`](#21-no-module-named-animalai)
    - [2.3 Incompatible Python Version](#23-incompatible-python-version)
-  - [3. File Not Found Error](#3-file-not-found-error)
+  + [3. File Not Found Error](#3-file-not-found-error)

## 1. Troubleshooting Installation Issues
+
Encountering issues while installing the Animal-AI environment? Here are some solutions to common problems:

### 1.1 Resolving Environment Permission Errors
+
#### 1.1.1 For macOS and Linux Users

Permission errors after importing a Unity environment? Adjust file permissions with these commands:

**macOS:**
+
```sh
chmod -R 755 *.app
```

**Linux:**
+
```sh
chmod -R 755 *.x86_64
```

#### 1.1.2 For Windows Users
+
Windows users generally don't need additional permissions. If needed, refer to [Microsoft Documentation](https://docs.microsoft.com/).

### 1.2 Addressing Environment Connection Timeouts

-Timeout errors when launching through `UnityEnvironment`? Consider these fixes:
-- **No Agent in Scene:** Ensure an agent is in the scene.
-- **Firewall Issues on macOS:** Follow [Apple's instructions](https://support.apple.com/) to add exceptions.
-- **Errors in Unity Environment:** Refer to [Unity log files](https://docs.unity3d.com/Manual/LogFiles.html).
-- **Running in a Headless Environment:** Use `--no-graphics` or `no_graphics=True` if you intend on using this feature (not fully supported).
+Timeout errors when launching through `UnityEnvironment`? Consider these fixes:
+
+* **No Agent in Scene:** Ensure an agent is in the scene.
+* **Firewall Issues on macOS:** Follow [Apple's instructions](https://support.apple.com/) to add exceptions.
+* **Errors in Unity Environment:** Refer to [Unity log files](https://docs.unity3d.com/Manual/LogFiles.html).
+* **Running in a Headless Environment:** Use `--no-graphics` or `no_graphics=True` if you intend to use this feature (not fully supported).
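Several of these timeouts trace back to the communication port (covered in the next subsection). Before launching the environment, you can probe for a free port using only the Python standard library; this is a hedged sketch — the helper name is illustrative, and 5005 is simply the base port referenced in this FAQ's examples.

```python
import socket

def find_free_port(start=5005, tries=100):
    """Return the first TCP port >= start that can currently be bound locally.

    Illustrative helper: 5005 is the base port used in this FAQ's
    `UnityEnvironment(...)` examples; `worker_id` offsets from it, so
    probing a small range is usually enough.
    """
    for port in range(start, start + tries):
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            try:
                s.bind(("127.0.0.1", port))
                return port  # bind succeeded, so the port is currently free
            except OSError:
                continue  # port already in use; try the next one
    raise RuntimeError("no free port found in range")

print(find_free_port())
```

Note that a port can be taken between the probe and the launch, so treat this as a convenience check rather than a guarantee.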
### 1.3 Communication Port Conflict
+
Encountering port conflicts? Try changing the worker number or port:

```python
UnityEnvironment(file_name=filename, worker_id=X)
```
+
Or find an available port:
+
```python
port = 5005 + random.randint(0, 1000)
```

### 1.4 Mean Reward Displaying NaN

-Seeing `Mean reward : nan`? Set the `Max Steps` to a non-zero value or script custom termination conditions.
+Seeing `Mean reward : nan`? Set the `Max Steps` to a non-zero value or script custom termination conditions.

## 2. Python API / Package Dependency Issues
+
Encountering issues with the Python API or package dependencies? Here are some solutions to common problems:

### 2.1 No Module Named `animalai`

-Seeing `ModuleNotFoundError: No module named 'animalai'`? Ensure the `animalai` package is installed:
+
+Seeing `ModuleNotFoundError: No module named 'animalai'`? Ensure the `animalai` package is installed:

```sh
pip install animalai
```
+
or, for a per-user installation outside a virtual environment:
+
```sh
pip install animalai --user
```
+
or conda:

```sh
conda install -c conda-forge animalai
```

Please do not forget to activate your environment before installing the package.

@@ -80,10 +97,13 @@ You can verify the installation by running:

```sh
python -c "import animalai"
```
+
### 2.3 Incompatible Python Version
+
Currently, the Animal-AI environment only supports **Python 3.6 to 3.9.** We have tested 3.6, 3.7, and 3.8, but we cannot guarantee that they will work for everyone. If you are using a different version of Python, please install Python 3.9 for the optimal experience.

Please verify that you are using the correct version of Python by running:
+
```sh
python --version
```

@@ -92,8 +112,8 @@ If you are using a different version of Python, please install Python 3.9 for th

## 3. File Not Found Error

-Seeing `FileNotFoundError: [Errno 2] No such file or directory: 'AnimalAI/AnimalAI.app'`?
Ensure the `AnimalAI` folder is in the same directory as your Python script.
+Seeing `FileNotFoundError: [Errno 2] No such file or directory: 'AnimalAI/AnimalAI.app'`? Ensure the `AnimalAI` folder is in the same directory as your Python script.

-If you are using macOS, you may get this error: `FileNotFoundError: [Errno 2] No such file or directory: 'env/AnimalAI'`. This error occurs when running the ``python play.py`` command from the ``animal-ai/examples`` folder.
+If you are using macOS, you may get this error: `FileNotFoundError: [Errno 2] No such file or directory: 'env/AnimalAI'`. This error occurs when running the `python play.py` command from the `animal-ai/examples` folder.

-To fix this, simply rename the 'MACOS.app' folder you downloaded to AnimalAI. This will allow the ``play.py`` script to find the environment. Note that this error is likely to occur in older versions of Animal-AI.
\ No newline at end of file
+To fix this, simply rename the 'MACOS.app' folder you downloaded to `AnimalAI`. This will allow the `play.py` script to find the environment. Note that this error is likely to occur in older versions of Animal-AI.
diff --git a/docs/Glossary.md b/docs/Glossary.md
index 763613bfd..a624e9516 100644
--- a/docs/Glossary.md
+++ b/docs/Glossary.md
@@ -1,6 +1,7 @@
 # Animal-AI Glossary

 #### Table of Contents
+
 1. [Animal-AI Terms](#animal-ai-terms)
 2. [RL/ML-Agents Terms](#rlml-agents-terms)
 3. [Unity Terms](#unity-terms)
@@ -8,48 +9,46 @@

 ## Animal-AI Terms

-- **Animal-AI**: The Animal-AI environment, encompassing the Unity environment, Python API, and associated documentation.
-- **Episode**: A single run of the environment, starting with a reset and ending with a failure or success.
-- **Arena**: The area in which the agent is placed at the start of an episode, which is synonymous with the environment/episode.
-
+* **Animal-AI**: The Animal-AI environment, encompassing the Unity environment, Python API, and associated documentation.
+* **Episode**: A single run of the environment, starting with a reset and ending with a failure or success. +* **Arena**: The area in which the agent is placed at the start of an episode, which is synonymous with the environment/episode. ## RL/ML-Agents Terms -- **Academy**: A singleton object controlling the timing, reset, and training/inference settings of the environment. -- **Action**: The execution of a decision by an agent within the environment. -- **Agent**: A Unity Component that generates observations and takes actions in the environment, based on decisions from a Policy. -- **Decision**: The output of a Policy, specifying an action in response to an observation. -- **Editor**: The Unity Editor, encompassing various panes like Hierarchy, Scene, Inspector. -- **Environment**: The Unity scene containing Agents. -- **Experience**: A tuple [Agent observations, actions, rewards] representing a single Agent's data after a Step. -- **FixedUpdate**: A Unity method called at each step of the game engine, where ML-Agents logic is typically executed. -- **Frame**: An instance of rendering by the main camera, corresponding to each `Update` call in the game engine. -- **Observation**: Information available to an agent about the environment's state (e.g., Vector, Visual). -- **Policy**: The decision-making mechanism (often a neural network) that produces decisions from observations. -- **Reward**: A signal indicating the desirability of an agent’s action within the current environment state. -- **State**: The underlying properties of the environment and all agents within it at a given time. -- **Step**: An atomic change in the engine occurring between Agent decisions. -- **Trainer**: A Python class responsible for training a group of Agents. -- **Update**: A Unity function called at each frame rendering. ML-Agents logic is typically not executed here. +* **Academy**: A singleton object controlling the timing, reset, and training/inference settings of the environment. 
+* **Action**: The execution of a decision by an agent within the environment. +* **Agent**: A Unity Component that generates observations and takes actions in the environment, based on decisions from a Policy. +* **Decision**: The output of a Policy, specifying an action in response to an observation. +* **Editor**: The Unity Editor, encompassing various panes like Hierarchy, Scene, Inspector. +* **Environment**: The Unity scene containing Agents. +* **Experience**: A tuple [Agent observations, actions, rewards] representing a single Agent's data after a Step. +* **FixedUpdate**: A Unity method called at each step of the game engine, where ML-Agents logic is typically executed. +* **Frame**: An instance of rendering by the main camera, corresponding to each `Update` call in the game engine. +* **Observation**: Information available to an agent about the environment's state (e.g., Vector, Visual). +* **Policy**: The decision-making mechanism (often a neural network) that produces decisions from observations. +* **Reward**: A signal indicating the desirability of an agent’s action within the current environment state. +* **State**: The underlying properties of the environment and all agents within it at a given time. +* **Step**: An atomic change in the engine occurring between Agent decisions. +* **Trainer**: A Python class responsible for training a group of Agents. +* **Update**: A Unity function called at each frame rendering. ML-Agents logic is typically not executed here. ## Unity Terms -- **Unity Objects**: Fundamental components in the Unity Engine, serving as containers for all other components or functionalities within a Unity scene. -- **GameObjects**: Core elements in Unity, representing characters, props, scenery, cameras, lights, etc. -- **Prefabs**: Templates created from GameObjects in Unity, allowing for reuse and consistency across scenes or projects. 
-- **Immovable Objects**: Objects in the Animal-AI environment that are fixed in place and cannot be moved, like walls and ramps. -- **Movable Objects**: Objects that can be easily moved by the agent or other objects in the environment. -- **Rewards**: Objects providing positive or negative feedback to the agent, including stationary and moving goals. -- **Reward Spawners**: Objects with the primary function of spawning rewards in the environment. -- **Environment Permission Errors**: Issues related to file permissions when importing a Unity environment, particularly on macOS and Linux. -- **Environment Connection Timeouts**: Problems encountered when launching the Animal-AI environment through `UnityEnvironment`. -- **Communication Port Conflict**: Issues arising from port conflicts when initializing the Unity environment. -- **Mean Reward Displaying NaN**: A scenario where the mean reward metric shows 'nan', indicating an issue with the environment setup or configuration. +* **Unity Objects**: Fundamental components in the Unity Engine, serving as containers for all other components or functionalities within a Unity scene. +* **GameObjects**: Core elements in Unity, representing characters, props, scenery, cameras, lights, etc. +* **Prefabs**: Templates created from GameObjects in Unity, allowing for reuse and consistency across scenes or projects. +* **Immovable Objects**: Objects in the Animal-AI environment that are fixed in place and cannot be moved, like walls and ramps. +* **Movable Objects**: Objects that can be easily moved by the agent or other objects in the environment. +* **Rewards**: Objects providing positive or negative feedback to the agent, including stationary and moving goals. +* **Reward Spawners**: Objects with the primary function of spawning rewards in the environment. +* **Environment Permission Errors**: Issues related to file permissions when importing a Unity environment, particularly on macOS and Linux. 
+* **Environment Connection Timeouts**: Problems encountered when launching the Animal-AI environment through `UnityEnvironment`. +* **Communication Port Conflict**: Issues arising from port conflicts when initializing the Unity environment. +* **Mean Reward Displaying NaN**: A scenario where the mean reward metric shows 'nan', indicating an issue with the environment setup or configuration. ## YAML Terms -- **YAML File**: A file containing data in YAML format. -- **YAML Configuration File**: A YAML file containing configuration data for the Animal-AI environment. -- **YAML Configuration Name**: The name of a YAML configuration file, which is used to identify the configuration. Your custom configurations are used to create a new environment. -- **YAML Configuration Path**: The path to a YAML configuration file. - +* **YAML File**: A file containing data in YAML format. +* **YAML Configuration File**: A YAML file containing configuration data for the Animal-AI environment. +* **YAML Configuration Name**: The name of a YAML configuration file, which is used to identify the configuration. Your custom configurations are used to create a new environment. +* **YAML Configuration Path**: The path to a YAML configuration file. diff --git a/docs/Technical-Overview.md b/docs/Technical-Overview.md index 87bce5a50..2029a7880 100644 --- a/docs/Technical-Overview.md +++ b/docs/Technical-Overview.md @@ -5,11 +5,12 @@ This guide will walk you through the engineering aspects of the Animal-AI Enviro If you have any questions or issues, please check the [FAQ](docs/FAQ.md) and documentation before posting an issue on GitHub. 
#### Table of Contents -- [Running the Environment](#running-the-environment) - - [Play Mode](#play-mode) + +* [Running the Environment](#running-the-environment) + + [Play Mode](#play-mode) - [Controls in Play Mode](#controls-in-play-mode) - - [Train Mode](#train-mode) -- [Environment Overview](#environment-overview) + + [Train Mode](#train-mode) +* [Environment Overview](#environment-overview) - [Observations](#observations) - [Actions](#actions) - [Rewards](#rewards) @@ -17,7 +18,7 @@ If you have any questions or issues, please check the [FAQ](docs/FAQ.md) and doc - [Configuration Files](#configuration-files) - [Arena Files](#arena-files) - [Unity Editor](#unity-editor) -- [Training Agents](#training-agents) +* [Training Agents](#training-agents) - [Baselines](#baselines) - [Training Scripts](#training-scripts) - [Training Observations](#training-observations) @@ -25,16 +26,14 @@ If you have any questions or issues, please check the [FAQ](docs/FAQ.md) and doc - [Training Curriculum](#training-curriculum) - [Training Arena Files](#training-arena-files) - [Training Configuration Files](#training-configuration-files) -- [Testing Agents](#testing-agents) -- [Contributing](#contributing) -- [Citation](#citation) -- [License](#license) - +* [Testing Agents](#testing-agents) +* [Contributing](#contributing) +* [Citation](#citation) +* [License](#license) ## Running the Environment -The Animal-AI Environment can be run in one of two modes: `Play` and `Train`. In `Play` mode, the environment is run with a human player controlling the agent. In `Train` mode, the environment is run with an AI agent (see [Training Agents](#training-agents)). - +The Animal-AI Environment can be run in one of two modes: `Play` and `Train`. In `Play` mode, the environment is run with a human player controlling the agent. In `Train` mode, the environment is run with an AI agent (see [Training Agents](#training-agents)).
### Play Mode @@ -43,6 +42,7 @@ To run the environment in `Play` mode, simply run the Animal-AI application for ```bash animalai play configs/curriculum/0.yaml ``` + #### Controls in Play Mode In play mode, you can switch the camera view and control the agent using the following keyboard commands: @@ -57,7 +57,7 @@ In play mode, you can switch the camera view and control the agent using the fol | R | Reset environment | | Q | Quit application | -Toggle the camera between first-person, third-person, and bird's eye view using the `C` key. The agent can be controlled using `W`, `A`, `S`, `D` (or the arrow keys). Hitting `R` or collecting certain rewards (green or red) will reset the arena. Note that the camera and agent controls are not available in `Train` mode, with only third-person perspective implemented currently (we plan to add multiple camera observations during training at some point). Furthermore, you can toggle on/off the ability to restrict the player's camera angles via the `canChangePerspective` parameter in the configuration file. If this is set to false, then the player will not be able to change the camera angle. In addition, you can toggle on/off the ability to reset the arena via the `canResetArena` parameter in the configuration file. If this is set to false, then the player will not be able to reset the arena manually. A new feature added is that users can now toggle on/off Lastly, if you have multiple arenas specified in youur configuration file, you can randomize via the `randomizeArenas` parameter. This is false by default. +Toggle the camera between first-person, third-person, and bird's eye view using the `C` key. The agent can be controlled using `W`, `A`, `S`, `D` (or the arrow keys). Hitting `R` or collecting certain rewards (green or red) will reset the arena.
Note that the camera and agent controls are not available in `Train` mode, with only third-person perspective implemented currently (we plan to add multiple camera observations during training at some point). Furthermore, you can control whether the player can change camera angles via the `canChangePerspective` parameter in the configuration file. If this is set to false, then the player will not be able to change the camera angle. In addition, you can control whether the player can reset the arena via the `canResetArena` parameter in the configuration file. If this is set to false, then the player will not be able to reset the arena manually. Lastly, if you have multiple arenas specified in your configuration file, you can randomize their order via the `randomizeArenas` parameter. This is false by default. ### Train Mode @@ -70,5 +70,3 @@ animalai train configs/curriculum/0.yaml ## Environment Overview Regardless of which mode you are using, the arena you specify in the configuration file will be loaded. The agent will be placed in the arena and the environment will run until the agent reaches the goal or the episode time limit is reached. The environment will then reset and the process will repeat. The order of the arenas in the configuration file determines the order in which the arenas are loaded. Take a look at the [Configuration Files](#configuration-files) section for more details on how to create your own configuration files. - - diff --git a/docs/Using-Jupyter-Notebooks.md b/docs/Using-Jupyter-Notebooks.md index 54d3cafe0..74bbfd271 100644 --- a/docs/Using-Jupyter-Notebooks.md +++ b/docs/Using-Jupyter-Notebooks.md @@ -3,87 +3,129 @@ This guide combines instructions on creating a custom kernel for Jupyter Notebooks and specific steps for using Jupyter Notebooks with the Animal-AI environment.
It is advised that you use either a Python virtual environment or Anaconda for easier project management. See [Using-Virtual-Environments](/docs/Using-Virtual-Environment.md) for more information on creating and using a virtual environment. For more information on Jupyter Notebooks, refer to the [official documentation](https://jupyter-notebook.readthedocs.io/en/stable/). #### Table of Contents -- [Creating a Kernel for Jupyter Notebooks](#creating-a-kernel-for-jupyter-notebooks) -- [Using Jupyter Notebooks with Animal-AI](#using-jupyter-notebooks-with-animal-ai) -- [Advantages of Using Jupyter Notebooks](#advantages-of-using-jupyter-notebooks) -- [Tips for Effective Jupyter Notebook Use](#tips-for-effective-jupyter-notebook-use) + +* [Creating a Kernel for Jupyter Notebooks](#creating-a-kernel-for-jupyter-notebooks) +* [Using Jupyter Notebooks with Animal-AI](#using-jupyter-notebooks-with-animal-ai) +* [Advantages of Using Jupyter Notebooks](#advantages-of-using-jupyter-notebooks) +* [Tips for Effective Jupyter Notebook Use](#tips-for-effective-jupyter-notebook-use) ## Creating a Kernel for Jupyter Notebooks ### Step-by-Step Guide + 1. **Install the IPython Kernel**: - ```bash + + +```bash pip install ipykernel ``` + 2. **Create a New Python Environment**: - Using venv: - ```bash + + + +```bash python -m venv /path/to/new/virtual/environment ``` + - Using Conda: - ```bash + + + +```bash conda create -n myenv python=3.x ``` + 3. **Activate the Environment**: - Using venv: - ```bash + + + +```bash source /path/to/new/virtual/environment/bin/activate ``` + - Using Conda: - ```bash + + + +```bash conda activate myenv ``` + 4. **Install Necessary Packages**: - ```bash + + +```bash pip install numpy pandas matplotlib ``` + 5. **Add Your Kernel to Jupyter**: - ```bash + + +```bash ipython kernel install --name "myenv" --user ``` + 6. **Launch Jupyter Notebook**: - ```bash + + +```bash jupyter notebook ``` + 7.
**Select Your Kernel**: Choose "myenv" from the kernel list in Jupyter. ### Notes -- Replace placeholders with your desired directory and environment name. -- Adjust Python version as needed. + +* Replace placeholders with your desired directory and environment name. +* Adjust Python version as needed. ## Using Jupyter Notebooks with Animal-AI ### Introduction + _Jupyter Notebooks_ are interactive documents that combine live code, output, text, and visualizations. ### Setup + 1. **Install Jupyter**: - ```bash + + +```bash pip install notebook ``` + 2. **Start Jupyter Notebook**: - - Use `jupyter notebook` or JupyterLab (`jupyter lab`). + - Use `jupyter notebook` or JupyterLab (`jupyter lab`). ### Using with Animal-AI -- **Create a New Notebook**: In your project directory. -- **Import Animal-AI Package**: - ```python + +* **Create a New Notebook**: In your project directory. +* **Import Animal-AI Package**: + + +```python from animalai.envs.environment import AnimalAIEnvironment # Other necessary imports for your script ``` ### Writing Interactive Scripts -- **Initialize Environment**: Set up the Animal-AI environment. -- **Run Experiments**: Write code for experiments, training, or visualization. -- **Visualize Outputs**: Display results using Jupyter's capabilities. + +* **Initialize Environment**: Set up the Animal-AI environment. +* **Run Experiments**: Write code for experiments, training, or visualization. +* **Visualize Outputs**: Display results using Jupyter's capabilities. ## Advantages of Using Jupyter Notebooks -- **Interactivity**: Test code in small, independent blocks. -- **Documentation**: Combine code with rich text and visualizations. -- **Experimentation**: Ideal for testing new ideas and visualizing data. + +* **Interactivity**: Test code in small, independent blocks. +* **Documentation**: Combine code with rich text and visualizations. +* **Experimentation**: Ideal for testing new ideas and visualizing data.
## Tips for Effective Jupyter Notebook Use -- **Manage Resources**: Be aware of resource usage, especially when running complex simulations. -- **Kernel Management**: Restart the Jupyter kernel to clear memory and state if needed. -- **Version Control**: Export code to Python scripts for version control in larger projects. + +* **Manage Resources**: Be aware of resource usage, especially when running complex simulations. +* **Kernel Management**: Restart the Jupyter kernel to clear memory and state if needed. +* **Version Control**: Export code to Python scripts for version control in larger projects. diff --git a/docs/Using-Virtual-Environment.md b/docs/Using-Virtual-Environment.md index 5c3ff2699..c702af929 100644 --- a/docs/Using-Virtual-Environment.md +++ b/docs/Using-Virtual-Environment.md @@ -1,26 +1,29 @@ # Using Virtual Environments for Animal-AI #### Table of Contents -- [Introduction to Virtual Environments](#introduction-to-virtual-environments) -- [Benefits of Using a Virtual Environment](#benefits-of-using-a-virtual-environment) -- [Python Version Compatibility](#python-version-compatibility) -- [Setting Up Virtual Environments](#setting-up-virtual-environments) - - [Common Steps](#common-steps) - - [Mac OS X](#mac-os-x) - - [Ubuntu](#ubuntu) - - [Windows](#windows) -- [Introduction to Conda Environments](#introduction-to-conda-environments) -- [Setting Up Conda Environments](#setting-up-conda-environments) - - [Installing Conda](#installing-conda) - - [Creating a Conda Environment](#creating-a-conda-environment) - - [Managing Packages](#managing-packages) - - [Deactivating an Environment](#deactivating-an-environment) -- [Conclusion](#conclusion) + +* [Introduction to Virtual Environments](#introduction-to-virtual-environments) +* [Benefits of Using a Virtual Environment](#benefits-of-using-a-virtual-environment) +* [Python Version Compatibility](#python-version-compatibility) +* [Setting Up Virtual Environments](#setting-up-virtual-environments) + + 
[Common Steps](#common-steps) + + [Mac OS X](#mac-os-x) + + [Ubuntu](#ubuntu) + + [Windows](#windows) +* [Introduction to Conda Environments](#introduction-to-conda-environments) +* [Setting Up Conda Environments](#setting-up-conda-environments) + + [Installing Conda](#installing-conda) + + [Creating a Conda Environment](#creating-a-conda-environment) + + [Managing Packages](#managing-packages) + + [Deactivating an Environment](#deactivating-an-environment) +* [Conclusion](#conclusion) ## Introduction to Virtual Environments + A _Virtual Environment_ in Python is a self-contained directory that includes a specific version of Python and various packages. This isolated environment helps in managing project dependencies effectively. For more details, visit the [Python venv documentation](https://docs.python.org/3/library/venv.html) and [Anaconda documentation](https://docs.anaconda.com/). ## Benefits of Using a Virtual Environment + Using a Virtual Environment offers several advantages: 1. Simplifies dependency management for individual projects. 2. Facilitates testing with different versions of libraries, ensuring code compatibility. @@ -28,64 +31,79 @@ Using a Virtual Environment offers several advantages: 4. Allows for easy sharing of project requirements with collaborators. ## Python Version Compatibility -- This guide is compatible with Python version `3.9.9`. -- Using newer Python versions might lead to compatibility issues with some libraries. + +* This guide is compatible with Python version `3.9.9`. +* Using newer Python versions might lead to compatibility issues with some libraries. ## Setting Up Virtual Environments ### Common Steps + 1. **Install Python 3.9.9**: Ensure this version is installed on your system. If not, download it from [Python's official website](https://www.python.org/downloads/). 2. 
**Install Pip**: - Download Pip: `curl https://bootstrap.pypa.io/get-pip.py -o get-pip.py` + - Install Pip: `python3 get-pip.py` + - Verify installation: `pip3 -V` - - **Ubuntu Note**: If you encounter a `ModuleNotFoundError`, install `python3-distutils` using `sudo apt-get install python3-distutils`. + + - **Ubuntu Note**: If you encounter a `ModuleNotFoundError`, install `python3-distutils` using `sudo apt-get install python3-distutils`. ### Mac OS X + 1. Create a directory for environments: `mkdir ~/python-envs`. 2. Create a new environment: `python3 -m venv ~/python-envs/sample-env`. 3. Activate the environment: `source ~/python-envs/sample-env/bin/activate`. 4. Update Pip and setuptools: - `pip3 install --upgrade pip` - - `pip3 install --upgrade setuptools`. + + - `pip3 install --upgrade setuptools`. 5. Deactivate with `deactivate` (reactivate using the same command). ### Ubuntu + 1. Install the `python3-venv` package: `sudo apt-get install python3-venv`. 2. Follow the Mac OS X steps for environment creation and management. ### Windows -1. Create a directory for environments: `md python-envs`. -2. Create a new environment: `python -m venv python-envs\sample-env`. -3. Activate the environment: `python-envs\sample-env\Scriptsctivate`. -4. Update Pip: `pip install --upgrade pip`. + +1. Create a directory for environments: `md python-envs`. +2. Create a new environment: `python -m venv python-envs\sample-env`. +3. Activate the environment: `python-envs\sample-env\Scripts\activate`. +4. Update Pip: `pip install --upgrade pip`. 5. Deactivate with `deactivate` (reactivate using the same command). **Additional Notes for Windows Users**: -- Confirm Python version: `python --version`. -- Admin privileges may be required for Python installation. -- This guide is specific to Windows 10 with a 64-bit architecture. +* Confirm Python version: `python --version`. +* Admin privileges may be required for Python installation.
+* This guide is specific to Windows 10 with a 64-bit architecture. ## Introduction to Conda Environments + _Anaconda_ (or simply Conda) is an open-source package management and environment management system that runs on Windows, macOS, and Linux. Conda environments are similar to Python virtual environments but are managed with the Conda package manager. ## Setting Up Conda Environments ### Installing Conda + 1. Download and install Anaconda or Miniconda from [Conda's official website](https://www.anaconda.com/distribution/). 2. Open a terminal (or Anaconda Prompt on Windows) and check the Conda version: `conda --version`. ### Creating a Conda Environment + 1. Create a new environment: `conda create --name myenv` (replace `myenv` with your desired environment name). 2. Activate the environment: `conda activate myenv`. ### Managing Packages -- Install a package: `conda install numpy` (replace `numpy` with your desired package). -- Update a package: `conda update numpy`. -- List installed packages: `conda list`. + +* Install a package: `conda install numpy` (replace `numpy` with your desired package). +* Update a package: `conda update numpy`. +* List installed packages: `conda list`. ### Deactivating an Environment -- Deactivate with `conda deactivate`. + +* Deactivate with `conda deactivate`. ## Conclusion + Virtual Environments offer a robust solution for managing complex dependencies and are particularly useful for projects requiring a combination of Python and non-Python packages. This guide should help you get started with Conda or Python's Virtual Environments for use with Animal-AI. diff --git a/docs/configGuide/YAML-Config-Syntax.md b/docs/configGuide/YAML-Config-Syntax.md index 44b99972c..14449ccf8 100644 --- a/docs/configGuide/YAML-Config-Syntax.md +++ b/docs/configGuide/YAML-Config-Syntax.md @@ -1,6 +1,7 @@ # Detailed Arena Config Guide #### Table of Contents + 1. [Introduction](#introduction) 2.
[Understanding YAML Syntax](#understanding-yaml-syntax) 2.1 [YAML Hierarchical Syntax](#yaml-hierarchical-syntax) @@ -22,10 +23,10 @@ Let's take a look at some examples to understand how to use the YAML syntax in Animal-AI to create custom arenas. We will start with some simple examples and then move on to more complex examples. - ### Understanding YAML Syntax #### YAML Hierarchical Syntax + ```YAML # note that the arena has a fixed size of 40x40, meaning the size of the arena does not change. # in later versions of Animal-AI, the arena size will be configurable and set dynamically. arenas: Items: ... # rest of configuration file... ``` + **Observations:** We can observe the following structure: -- `!ArenaConfig` is the root tag. -- `arenas` is the tag for the arenas. -- `0` is the tag for the first arena in the file. -- `!Arena` is the tag for the arena itself. -- `Items` is the tag for the objects to spawn in the arena. +* `!ArenaConfig` is the root tag. +* `arenas` is the tag for the arenas. +* `0` is the tag for the first arena in the file. +* `!Arena` is the tag for the arena itself. +* `Items` is the tag for the objects to spawn in the arena. -The `!ArenaConfig` tag is used to indicate that the following YAML file is an ArenaConfig file. The `arenas` tag is used to indicate that the following YAML file contains one or more arenas. The `0` tag indicates that the following arena is the first arena in the file, upto `n arenas`. The `!Arena` tag indicates that the following YAML file contains an arena. The `!` tag is used to indicate that the following YAML file is a custom class. In this case, the `!Arena` tag indicates that the following YAML file is an Arena file. The `!Arena` tag is followed by a list of parameters that are used to define the arena, with the objects to spawn for that particular arena only.
Some arena parameters are applied locally, such as `t` (time limit) and `pass_mark` (more on this later), while others are applied globally (see below for an example). +The `!ArenaConfig` tag marks the file as an ArenaConfig file, and the `arenas` tag introduces one or more arenas, indexed from `0` up to `n`. The `!` prefix denotes a custom class: the `!Arena` tag marks an arena definition and is followed by the parameters that define that arena, together with the objects to spawn in that particular arena only. Some arena parameters are applied locally, such as `t` (time limit) and `pass_mark` (more on this later), while others are applied globally (see below for an example). #### YAML Hierarchical Syntax (Config/YAML Global Parameters) + ```YAML !ArenaConfig # Global Parameters that are optional to put here. @@ -60,18 +63,20 @@ arenas: ... # rest of configuration file... ``` + **Observations:** We can observe: -- The default values for the global parameters are as follows: `canChangePerspective: true`, `canResetEpisode: true`, `showNotification: false`, and `randomizeArenas: false`. +* The default values for the global parameters are as follows: `canChangePerspective: true`, `canResetEpisode: true`, `showNotification: false`, and `randomizeArenas: false`. Bear in mind that the global parameters are optional to define. If we do not define them, the default values are used. -- If we do not provide global parameters, the default values are used. For example, if we do not provide a value for `canChangePerspective`, the default value of `true` is used.
-- If we provide a value for a global parameter, the value is applied to all arenas in the file. For example, if we set `canChangePerspective` to `false`, the agent will not be able to change its perspective in any of the arenas in the file. However, if we set `canChangePerspective` to `true`, the agent will be able to change its perspective in any of the arenas in the file. +* If we do not provide global parameters, the default values are used. For example, if we do not provide a value for `canChangePerspective`, the default value of `true` is used. +* If we provide a value for a global parameter, the value is applied to all arenas in the file. For example, if we set `canChangePerspective` to `false`, the agent will not be able to change its perspective in any of the arenas in the file. However, if we set `canChangePerspective` to `true`, the agent will be able to change its perspective in any of the arenas in the file. In the example above, the global parameters are defined before the arenas. These parameters are applied to all arenas in the file. Please note that these parameters are only applicable during `Play` mode, not agent `Training` mode. #### YAML Hierarchical Syntax (Arena/Item Local Parameters) + ```YAML !ArenaConfig arenas: @@ -98,11 +103,12 @@ arenas: skins: - "hedgehog" ``` + **Observations:** Regarding Arena and Item local parameters, we can observe respectively that: -- The _arena-specific_ parameters are only applicable to the arena in which they are defined. For example, if `t` is set to `250`, the time limit for the arena will be 250 seconds. However, if there are multiple arenas defined in the same YAML configuration file `t` is set to `500`, the time limit for the arena will be 500 seconds for that arena only. Please note that these parameters are applicable during `Play` and `Training` modes. Lastly, the properties of each "Item" is a local parameter specified for that particular object only. 
For example, if the `Agent` object is specified to have a `hedgehog` skin, only the `Agent` object will have a `hedgehog` skin for that particular arena. +* The _arena-specific_ parameters are only applicable to the arena in which they are defined. For example, if `t` is set to `250`, the time limit for the arena will be 250 seconds. However, if multiple arenas are defined in the same YAML configuration file and `t` is set to `500` for another arena, the time limit will be 500 seconds for that arena only. Please note that these parameters are applicable during `Play` and `Training` modes. Lastly, the properties of each `Item` are local parameters specified for that particular object only. For example, if the `Agent` object is specified to have a `hedgehog` skin, only the `Agent` object will have a `hedgehog` skin for that particular arena. ```YAML !ArenaConfig arenas: @@ -134,7 +140,7 @@ arenas: - !Vector3 {x: 1000, y: 1000, z: 1000} ``` -- Moreover, we can observe that the `!Item` tag also contains local parameters of its own, which we define as _item-only_ local parameters. Such item-only local parameters are only applicable to the object in which they are defined. For example, if we define a `Wall` object twice in the same arena (as demonstrated in the above YAML snippet), the local parameters such as positions, sizes, colors, rotations etc. defined for the first `Wall` object will not apply to the second `Wall` object in the same arena. +* Moreover, we can observe that the `!Item` tag also contains local parameters of its own, which we define as _item-only_ local parameters. Such item-only local parameters are only applicable to the object in which they are defined. For example, if we define a `Wall` object twice in the same arena (as demonstrated in the above YAML snippet), the local parameters such as positions, sizes, colors, rotations, etc. defined for the first `Wall` object will not apply to the second `Wall` object in the same arena.
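Putting the three levels together, a minimal sketch of how global, arena-local, and item-local parameters combine in one file (the specific values, the `pass_mark` setting, and the object choices here are illustrative, not canonical):

```yaml
!ArenaConfig
canResetEpisode: true      # global: applies to every arena in this file (Play mode only)
showNotification: false    # global
arenas:
  0: !Arena
    t: 250                 # arena-local: time limit for arena 0 only
    pass_mark: 0           # arena-local
    Items:
    - !Item
      name: Wall
      positions:           # item-local: applies to this Wall entry only
      - !Vector3 {x: 10, y: 0, z: 10}
      sizes:
      - !Vector3 {x: 5, y: 3, z: 1}
    - !Item
      name: GoodGoal       # a second item with its own item-local parameters
      positions:
      - !Vector3 {x: 30, y: 0, z: 30}
```

Omitted item-local fields (here, `rotations` and `colors`) are randomized, as described in the common-parameters section.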
The syntax implemented allows for a high degree of flexibility in the creation of custom arenas where multiple objects of the same type can be defined with different properties for the same arena, without conflict. @@ -145,6 +151,7 @@ Let's now take a look at more complex examples to understand how to use the YAML Let's take a look at some examples to understand how to use the YAML syntax in Animal-AI to create custom arenas. #### EXAMPLE 1 - Standard Parameters & Randomisation + ```YAML !ArenaConfig arenas: @@ -177,21 +184,22 @@ arenas:

**Observations:** -- The number of parameters for `positions`, `rotations`, and `sizes` do not need to match. -- The environment will spawn `max(len(positions), len(rotations), len(sizes))` objects. -- Missing parameters are assigned randomly. For example, if `positions` is specified, but `sizes` are not, the environment will randomly assign sizes values to the objects. +* The number of parameters for `positions`, `rotations`, and `sizes` do not need to match. +* The environment will spawn `max(len(positions), len(rotations), len(sizes))` objects. +* Missing parameters are assigned randomly. For example, if `positions` is specified, but `sizes` are not, the environment will randomly assign size values to the objects. In this scenario, the objects will spawn in the following order: -- A pink Cube will appear at coordinates `[10, 10]` on the ground. It will have a rotation of `45` degrees and its size will be random along the `x` and `z` axes, with a fixed size of `y=5`. +* A pink Cube will appear at coordinates `[10, 10]` on the ground. It will have a rotation of `45` degrees and its size will be random along the `x` and `z` axes, with a fixed size of `y=5`. * Another Cube will be placed on the ground at a random `x` coordinate and `z=30`. This cube's rotation, size, and color will all be randomly determined. -- Three CylinderTunnel objects will spawn next, and each of these will be entirely random in terms of position, size, color, and rotation.
+* A GoodGoal object will then appear, with all its attributes randomized. +* Finally, the agent will spawn in a random position and orientation if it is unspecified in the arena instance. This is an important point to note, as, if the agent were specified, it would have priority over all other objects and would be spawned first, before any other object(s). -  +  #### EXAMPLE 2 - Decay Goals / Size-Changing Goals + ```YAML !ArenaConfig arenas: @@ -252,19 +260,20 @@ arenas: **Observations:** -This example showcases various goals that undergo changes such as `decay`, `growth`, `shrinkage`, and `ripening` (anti-decay). Each Item in this setup includes certain parameters that are either irrelevant or used incorrectly. These 'red herring' parameters, while not utilized properly, do not impact the overall outcome or cause issues with the AAI environment. +This example showcases various goals that undergo changes such as `decay`, `growth`, `shrinkage`, and `ripening` (anti-decay). Each Item in this setup includes certain parameters that are either irrelevant or used incorrectly. These 'red herring' parameters, while not utilized properly, do not impact the overall outcome or cause issues with the AAI environment. In the above scenario: -- The `ShrinkGoal` and `GrowGoal` ignore the declared `sizes` parameter. Instead, their sizes change based on the initialValues and finalValues. -- For both `DecayGoal` and `AntiDecayGoal`, the size is determined by the larger of the `initialValue` or `finalValue`. -- Additionally, the reward for these goals transitions from the initial value to the final value over time. +* The `ShrinkGoal` and `GrowGoal` ignore the declared `sizes` parameter. Instead, their sizes change based on the `initialValues` and `finalValues`. +* For both `DecayGoal` and `AntiDecayGoal`, the size is determined by the larger of the `initialValue` or `finalValue`.
+* Additionally, the reward for these goals transitions from the initial value to the final value over time. Interestingly, the ShrinkGoal includes a `symbolNames` parameter, which is typically reserved for `SignBoard` objects. This parameter is not applicable here and is therefore disregarded. -- Furthermore, an 'animal skin' feature is utilized in this example. Specifically, the Agent is configured to always appear with a 'hedgehog' skin. +* Furthermore, an 'animal skin' feature is utilized in this example. Specifically, the Agent is configured to always appear with a 'hedgehog' skin. -  +  #### EXAMPLE 3 - SignBoard (Preset Symbols) + ```YAML !ArenaConfig arenas: @@ -310,9 +319,10 @@ This example illustrates how to employ predefined symbols using the `symbolNames` parameter, which is exclusive to `SignBoard` objects. Each symbol in this list comes with a default color. However, these colors do not affect the texture of the symbol. Instead, the color of the SignBoard gameobject is determined by the `colors` parameter and only that. -  +  #### EXAMPLE 4 - SignBoard (Special Symbols) + ```YAML !ArenaConfig arenas: @@ -354,13 +364,14 @@ **Observations:** -This example demonstrates the use of *special codes* to generate black-and-white pixel grids to use as symbols. `0` -> black, `1` -> white, and `*` is a 'joker' character that chooses to output black or white at random. The dimensions of the grid are given by the `/` character - each row between `/`s must be of the same size for the code to be valid. +This example demonstrates the use of *special codes* to generate black-and-white pixel grids to use as symbols. `0` -> black, `1` -> white, and `*` is a 'joker' character that chooses to output black or white at random. The dimensions of the grid are given by the `/` character - each row between `/`s must be of the same size for the code to be valid.
-Fully-random grids can be generated using the code `"MxN"`, where `M` and `N` are the grid width and height dimensions respectively. For example, `"5x3"` will generate a 5x3 grid. +Fully-random grids can be generated using the code `"MxN"`, where `M` and `N` are the grid width and height dimensions respectively. For example, `"5x3"` will generate a 5x3 grid. -  +  #### EXAMPLE 5 - SpawnerButton (Interactive Objects) + ```YAML !ArenaConfig arenas: @@ -399,23 +410,24 @@ **Observations:** -- The `SpawnerButton` object is an interactive object, meaning that it can be interacted with by the player/agent. -- The `SpawnerButton` object is a modular object, meaning that it is made up of multiple modules. In this case, the `SpawnerButton` object is made up of 2 modules. +* The `SpawnerButton` object is an interactive object, meaning that it can be interacted with by the player/agent. +* The `SpawnerButton` object is a modular object, meaning that it is made up of multiple modules. In this case, the `SpawnerButton` object is made up of 2 modules.
If the weights are `[50, 50, 0]`, the probability of spawning the first and second rewards are 50%, while the probability of spawning the third reward is 0%. If the weights are `[33, 33, 33]`, the probability of spawning each reward is 33%. -- `spawnProbability`: The probability of spawning the reward upon interaction with the SpawnerButton. This parameter is used in conjunction with the `rewardWeights` parameter. Essentially, it controls the overall probability of spawning _a_ reward upon interaction with the SpawnerButton. For example, if you have the `rewardWeights` set to `[100, 100, 100]` but the `spawnProbability` set to `0.5`, the probability of spawning a reward is 50% at each interaction. Conversely, if you have the `rewardWeights` set to `[100, 100, 100]` but the `spawnProbability` set to `0.0`, the probability of spawning a reward is 0% at each interaction, meaning no reward will ever be spawned. -- `maxRewardCounts`: The maximum number of times each reward can be spawned. A value of -1 means no limit to the number of times the reward can be spawned per episode. -- `rewardSpawnPos`: The position where the reward will be spawned. If left unspecified, the reward will be spawned randomly within the arena. +* `moveDurations`: The duration of the movement of the button when it is pressed. +* `resetDurations`: The duration of the movement of the button when it is reset. +* `rewardNames`: The list of rewards that can be spawned. +* `rewardWeights`: The weights of each reward in the rewards list. +As an added feature, the weights can be used to control the probability of spawning each reward. For example, if the weights are `[100, 0, 0]`, the probability of spawning the first reward is 100%, while the probabilities of spawning the second and third rewards are 0%. If the weights are `[50, 50, 0]`, the probabilities of spawning the first and second rewards are 50%, while the probability of spawning the third reward is 0%. If the weights are `[33, 33, 33]`, the probability of spawning each reward is 33%. +* `spawnProbability`: The probability of spawning the reward upon interaction with the SpawnerButton. This parameter is used in conjunction with the `rewardWeights` parameter. Essentially, it controls the overall probability of spawning _a_ reward upon interaction with the SpawnerButton. For example, if you have the `rewardWeights` set to `[100, 100, 100]` but the `spawnProbability` set to `0.5`, the probability of spawning a reward is 50% at each interaction. Conversely, if you have the `rewardWeights` set to `[100, 100, 100]` but the `spawnProbability` set to `0.0`, the probability of spawning a reward is 0% at each interaction, meaning no reward will ever be spawned. +* `maxRewardCounts`: The maximum number of times each reward can be spawned. A value of -1 means no limit to the number of times the reward can be spawned per episode. +* `rewardSpawnPos`: The position where the reward will be spawned. If left unspecified, the reward will be spawned randomly within the arena. -  +  #### EXAMPLE 6 - Multiple Arenas (Randomisation) + ```YAML !ArenaConfig randomizeArenas: true # Here, we set randomizeArenas to true, which means that the arenas will be randomized upon play. Note that this is not applicable to training mode. @@ -453,9 +465,11 @@ arenas: + +

Arena 0

Arena 1

@@ -463,14 +477,15 @@ arenas: We can observe that: -- If we set `randomizeArenas` to `false`, the arenas will not be randomized upon play. Instead, the arenas will be played in the order in which they are defined in the file. For example, if we set `randomizeArenas` to `false`, the first arena will be played first, followed by the second arena. -- If we set `randomizeArenas` to `true`, the arenas will be randomized upon play and played in a random order. Regardless of the order in which the arenas are defined in the file, the arenas will be recycled (start from the beginning again) when every arena has been cycled through once. +* If we set `randomizeArenas` to `false`, the arenas will not be randomized upon play. Instead, the arenas will be played in the order in which they are defined in the file. For example, if we set `randomizeArenas` to `false`, the first arena will be played first, followed by the second arena. +* If we set `randomizeArenas` to `true`, the arenas will be randomized upon play and played in a random order. Regardless of the order in which the arenas are defined in the file, the arenas will be recycled (start from the beginning again) when every arena has been cycled through once. -In this example, we define two arenas. However, we set `randomizeArenas` to `true`, which means that the arenas will be randomized upon play. Note that this is not applicable to training mode. This means that the order in which the arenas are defined does not matter, as the arenas will be randomized upon play. Please note that the `randomizeArenas` parameter is only applicable to the arenas in the file, not the objects within the arenas. +In this example, we define two arenas. However, we set `randomizeArenas` to `true`, which means that the arenas will be randomized upon play. Note that this is not applicable to training mode, so the order in which the arenas are defined does not matter.
Please note that the `randomizeArenas` parameter is only applicable to the arenas in the file, not the objects within the arenas. -  +  #### EXAMPLE 7 - Arena 'Blackouts' + ```YAML !ArenaConfig arenas: @@ -496,9 +511,9 @@ arenas: We can observe that: -- The `blackouts` parameter is used to define the blackout zones for the arena. The `blackouts` parameter is a list of frames at which the arena will be blacked out. For example, if we set `blackouts` to `[10, 43, 50, 20]`, the arena will be blacked out at frames 10, 43, 50 and 20. This means the player/agent will be virtually blind at these frames (no light will be emitted to the arena). -- Additionally, if we set `blackouts` to `[-20]`, the arena will blackout every 20 frames (because we placed the '-' indicating repeat). -- The blackout has no effect on any other aspect of the agent or the arena. For example, the agent will still be able to move around the arena, and the objects in the arena will still be visible to the agent. _RayCasting_ will still work. +* The `blackouts` parameter is used to define the blackout zones for the arena. The `blackouts` parameter is a list of frames at which the arena will be blacked out. For example, if we set `blackouts` to `[10, 43, 50, 20]`, the arena will be blacked out at frames 10, 43, 50 and 20. This means the player/agent will be virtually blind at these frames (no light will be emitted to the arena). +* Additionally, if we set `blackouts` to `[-20]`, the arena will black out every 20 frames (because we placed the '-' indicating repeat). +* The blackout has no effect on any other aspect of the agent or the arena. For example, the agent will still be able to move around the arena, and the objects in the arena will still be visible to the agent. _RayCasting_ will still work.
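The repeating `[-20]` form described above can be sketched as a minimal config (hypothetical values, following the same structure as the earlier examples):

```YAML
!ArenaConfig
arenas:
  0: !Arena
    t: 500
    pass_mark: 0
    blackouts: [-20] # the '-' makes the 20-frame blackout repeat for the whole episode
    items:
    - !Item
      name: GoodGoal
      positions:
      - !Vector3 {x: 30, y: 0, z: 30}
```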
### Conclusion @@ -508,4 +523,4 @@ We hope that this guide has helped you understand how to use the YAML syntax in For more information on how YAML works, please refer to the [YAML documentation](https://yaml.org/spec/1.2/spec.html). If you are still unsure about how to use the YAML syntax, please refer to the [Background-YAML](/docs/Background-YAML.md) guide for a closer look into how YAML is used. ---- \ No newline at end of file +--- diff --git a/docs/gettingStarted/Arena-Environment-Guide.md b/docs/gettingStarted/Arena-Environment-Guide.md index f97499599..667140015 100644 --- a/docs/gettingStarted/Arena-Environment-Guide.md +++ b/docs/gettingStarted/Arena-Environment-Guide.md @@ -1,61 +1,65 @@ # Arena Environment Guide #### Table of Contents -- [Introduction](#introduction) -- [The Arena](#the-arena) -- [The Agent](#the-agent) - - [Agent HUD (Heads-Up Display)](#agent-hud-heads-up-display) - - [Arena/Agent Limitations](#arenaagent-limitations) - - [Agent Properties](#agent-properties) - - [Complex Agent Properties (ML-Agents / Training)](#complex-agent-properties-ml-agents--training) -- [GameObjects](#gameobjects) - - [Unique/Special Object Parameters](#uniquespecial-object-parameters) - - [Agent-Specific Parameters](#agent-specific-parameters) - - [Goal-Related Parameters](#goal-related-parameters) - - [Spawner Parameters](#spawner-parameters) - - [SignBoard Parameters](#signboard-parameters) -- [Blackouts](#blackouts) -- [Rules and Notes for Arena Configurations](#rules-and-notes-for-arena-configurations) - - [Spawning GameObjects](#spawning-gameobjects) - - [Configuration File Values](#configuration-file-values) + +* [Introduction](#introduction) +* [The Arena](#the-arena) +* [The Agent](#the-agent) + + [Agent HUD (Heads-Up Display)](#agent-hud-heads-up-display) + + [Arena/Agent Limitations](#arenaagent-limitations) + + [Agent Properties](#agent-properties) + + [Complex Agent Properties (ML-Agents / Training)](#complex-agent-properties-ml-agents--training) 
+* [GameObjects](#gameobjects) + + [Unique/Special Object Parameters](#uniquespecial-object-parameters) + + [Agent-Specific Parameters](#agent-specific-parameters) + + [Goal-Related Parameters](#goal-related-parameters) + + [Spawner Parameters](#spawner-parameters) + + [SignBoard Parameters](#signboard-parameters) +* [Blackouts](#blackouts) +* [Rules and Notes for Arena Configurations](#rules-and-notes-for-arena-configurations) + + [Spawning GameObjects](#spawning-gameobjects) + + [Configuration File Values](#configuration-file-values) ## Introduction This guide will help you understand the structure of the physical Arena Environment as developed in Unity. We will explain the various functions of the arena environment, and their purposes and uses. We will also outline the various parameters that can be used to configure the arena environment, and how to use them. Please see the [YAML Config Syntax](/docs/configGuide/YAML-Config-Syntax.md) guide for a detailed explanation of the syntax used in the configuration files for additional information. Be aware that this guide is not a comprehensive guide to Unity, and assumes that you have a basic understanding of the Unity Engine. If you are unfamiliar with Unity, please refer to the [Background - Unity](/docs/Background-Unity.md) guide for a brief overview of the Unity Engine as well as relevant useful links. - ## The Arena + + + +

2D view of the Arena

First-person view of agent

Full view of arena

Close-up of arena ground

Third Person view of one of the agent skins

Side view of walls

-Each **episode** (a single run) contains an _arena_ environment. Currently, an arena can only support a single agent (with spherical animal skins - _hedgehog_, _pig_, or _panda_). It is currently a square of fixed size `40x40`, meaning the size of the arena is immutable, with the origin of the arena is set to `(0,0)`. You can provide coordinates for objects in the range `[0,40]x[0,40]` as floats. +Each **episode** (a single run) contains an _arena_ environment. Currently, an arena can only support a single agent (with spherical animal skins - _hedgehog_, _pig_, or _panda_). It is currently a square of fixed size `40x40`, meaning the size of the arena is immutable, with the origin of the arena set to `(0,0)`. You can provide coordinates for objects in the range `[0,40]x[0,40]` as floats. The default arena is made up of a set of gameobjects, which itself is contained in a _Unity Scene_, which are as follows: -- **Walls**: The walls of the arena, which are 10 units high and 40 units long. The walls are made up of 4 gameobjects, one for each wall, which are named `Wall1`, `Wall2`, `Wall3`, and `Wall4`, each with a set of childobjects called `fences`, which contain the textures for the walls. The walls are all children of the `Walls` gameobject, which is itself a child of the `Arena` gameobject. -- **Ground**: The ground of the arena, which is 40 units long and 40 units wide. The ground is a child of the `Arena` gameobject. -- **Lights**: The lights of the arena, which are 4 spotlights, one for each corner of the arena. The lights are all children of the `Lights` gameobject, which is a child of the `Arena` gameobject. -- **SpawnArea**: The spawn gameobject responsible for spawning objects defined in the configuration file, which is a child of the `Arena` gameobject. This gameobject essentially controls the size of the spawn area, currently set to within the bounds of the walls of the arena.
-- **Agent**: The agent, which is a child of the `Arena` gameobject, must be spawned in every arena. +* **Walls**: The walls of the arena, which are 10 units high and 40 units long. The walls are made up of 4 gameobjects, one for each wall, which are named `Wall1`, `Wall2`, `Wall3`, and `Wall4`, each with a set of child objects called `fences`, which contain the textures for the walls. The walls are all children of the `Walls` gameobject, which is itself a child of the `Arena` gameobject. +* **Ground**: The ground of the arena, which is 40 units long and 40 units wide. The ground is a child of the `Arena` gameobject. +* **Lights**: The lights of the arena, which are 4 spotlights, one for each corner of the arena. The lights are all children of the `Lights` gameobject, which is a child of the `Arena` gameobject. +* **SpawnArea**: The spawn gameobject responsible for spawning objects defined in the configuration file, which is a child of the `Arena` gameobject. This gameobject essentially controls the size of the spawn area, currently set to within the bounds of the walls of the arena. +* **Agent**: The agent, which is a child of the `Arena` gameobject, must be spawned in every arena.

-In the above picture with the agent on the ground in the center of the environment its coordinates are `(20, 0, 20)`. Below is a sample configuration file for the default arena as shown above: +In the above picture, with the agent on the ground in the center of the environment, its coordinates are `(20, 0, 20)`. Below is a sample configuration file for the default arena as shown above: ```YAML !ArenaConfig @@ -74,15 +78,16 @@ arenas: - !Vector3 {x: 20, y: 0, z: 20} rotations: [0] ```
If set to `false`, the agent will be unable to change its camera perspective during an episode by pressing the C button on their keyboards, which will cycle through the cameras attached to the Agent in-gasme. If set to `true`, the agent will be able to change its perspective during an episode. This parameter is set to `true` by default. -- `randomizeArenas` a `bool`, defines whether the arena will be randomized between episodes. If set to `true`, the arena will be randomized between the defined Arenas in the configuration file. If set to `false`, the order to which the arenas are spawned are sequential and top-to-bottom as specified in the configuration file. This parameter is set to `false` by default. -- `showNotification` a `bool`, defines whether the player will receive a notification at the end of an episode. If set to `true`, the player will be shown a notification at the end of an episode for approximately 2.5 seconds, then move on to the next episode (arena). If set to `false`, the agent will not receive a notification at the end of an episode and episode-to-episode termination is back-to-back. This parameter is set to `false` by default. -- `blackouts` a `list`, defines the frames at which the lights are on or off during an episode. If omitted, the lights will be on for the entire episode. For more information on blackouts, [see here](#blackouts) -**N.B:** These parameters are optional (except `t` and `pass_mark`) and can be omitted from the configuration file. If omitted, the default values will be used, which are explained in detail in our [YAML Config Syntax](/docs/configGuide/YAML-Config-Syntax.md) guide. +* `n: !Arena` an `int`, denotes the unique arena number, which is used to identify the arena in the configuration file. The first arena must start with `0`, up to `n`, where `n` is the number of arenas defined in a single configuration file. +* `t` an `int`, defines the length of an episode, which can change from one episode to the next.
A value of `0` means that the episode will not terminate until a reward has been collected (setting `t=0` and having no reward will lead to an infinite episode). This value is converted into a decay rate for the health of the agent. A `t` of 100 means that the agent's health will decay to 0, and the episode will end, after 100 time steps. +* `pass_mark` an `int`, defines the reward threshold that should constitute a ‘pass’ in the environment. Leaving this parameter undefined leads to the default value of 0, whereby any reward value obtained by the Agent results in a pass. This parameter also determines the notifications that players receive at the end of an episode. If used, this parameter should be defined with consideration to the reward size that can feasibly be obtained by the agent in each configuration file. +* `canChangePerspective` a `bool`, defines whether the agent can change its camera perspective during an episode (first-person, third-person or eagle-view). If set to `false`, the agent will be unable to change its camera perspective during an episode by pressing the C button on their keyboards, which will cycle through the cameras attached to the Agent in-game. If set to `true`, the agent will be able to change its perspective during an episode. This parameter is set to `true` by default. +* `randomizeArenas` a `bool`, defines whether the arena will be randomized between episodes. If set to `true`, the arena will be randomized between the defined Arenas in the configuration file. If set to `false`, the order in which the arenas are spawned is sequential and top-to-bottom as specified in the configuration file. This parameter is set to `false` by default. +* `showNotification` a `bool`, defines whether the player will receive a notification at the end of an episode. If set to `true`, the player will be shown a notification at the end of an episode for approximately 2.5 seconds, then move on to the next episode (arena).
If set to `false`, the agent will not receive a notification at the end of an episode and episode-to-episode termination is back-to-back. This parameter is set to `false` by default. +* `blackouts` a `list`, defines the frames at which the lights are on or off during an episode. If omitted, the lights will be on for the entire episode. For more information on blackouts, [see here](#blackouts). + +**N.B:** These parameters are optional (except `t` and `pass_mark`) and can be omitted from the configuration file. If omitted, the default values will be used, which are explained in detail in our [YAML Config Syntax](/docs/configGuide/YAML-Config-Syntax.md) guide. ## The Agent @@ -90,34 +95,44 @@ The agent is the main character in the arena, for playing and training. It is a The controls are as follows: -- `W` - move forward -- `A` - move left -- `S` - move backward -- `D` - move right -- `C` - change camera perspective (first-person, third-person, eagle-view, only if `canChangePerspective` is `true`) -- `R` - reset the arena (cycles to the next episode if `canResetEpisode` is `true`) -- `Q` - quit (exits the application upon press) +* `W` - move forward +* `A` - move left +* `S` - move backward +* `D` - move right +* `C` - change camera perspective (first-person, third-person, eagle-view, only if `canChangePerspective` is `true`) +* `R` - reset the arena (cycles to the next episode if `canResetEpisode` is `true`) +* `Q` - quit (exits the application upon press) + +

Hedgehog

Panda

Pig

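Any of the three skins shown above can be pinned per arena in YAML; a minimal sketch, assuming the `skins` parameter described later under Agent-Specific Parameters:

```YAML
!ArenaConfig
arenas:
  0: !Arena
    t: 250
    pass_mark: 0
    items:
    - !Item
      name: Agent
      positions:
      - !Vector3 {x: 20, y: 0, z: 20}
      skins: ["panda"] # one of "panda", "pig", "hedgehog", or "random"
```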
### Agent HUD (Heads-Up Display) The agent has a HUD (_Heads-Up Display_) that displays the following information per episode by default: -- **Health**: The health of the agent, which is a value between `0` and `1`. The agent's health decays over time, and is reset to `1` when the agent collects a reward. The agent's health is displayed as a blue-green-red bar at the bottom of the HUD. -- **Reward**: The reward collected by the agent, which is a value between `-1` and `1`. The agent's reward is displayed as a text at the top of the HUD, which is updated in real-time as the agent collects rewards. It contains the previous episode's reward, as well as the current episode's reward, respectively. -- **Episode**: The episode number, which is the number of episodes the agent has played in the arena. The episode number is displayed as a white number at the top of the HUD. **(This is a feature to be added in the future.)** -- **Notification**: The notification displayed to the agent at the end of an episode. The notification is currently a combination of color gradients and a short animated GIF. This is an optional HUD and only appears if `showNotification` parameter is set to `true` in the configuration file. _Note that this feature has no effect on training, and is only used for playing the game._ +* **Health**: The health of the agent, which is a value between `0` and `1`. The agent's health decays over time, and is reset to `1` when the agent collects a reward. The agent's health is displayed as a blue-green-red bar at the bottom of the HUD. +* **Reward**: The reward collected by the agent, which is a value between `-1` and `1`. The agent's reward is displayed as a text at the top of the HUD, which is updated in real-time as the agent collects rewards. It contains the previous episode's reward, as well as the current episode's reward, respectively. +* **Episode**: The episode number, which is the number of episodes the agent has played in the arena. 
The episode number is displayed as a white number at the top of the HUD. **(This is a feature to be added in the future.)** +* **Notification**: The notification displayed to the agent at the end of an episode. The notification is currently a combination of color gradients and a short animated GIF. This is an optional HUD and only appears if the `showNotification` parameter is set to `true` in the configuration file. _Note that this feature has no effect on training, and is only used for playing the game._ -| ![](../../docs/figs/Agent-HUD/agent-health.png) | ![](../../docs/figs/Agent-HUD/agent-REWARD.png) | +| ![](../../docs/figs/Agent-HUD/agent-health.png) | ![](../../docs/figs/Agent-HUD/agent-REWARD.png) | |---|---| -| ![](../../docs/figs/Agent-HUD/notification-bad.png) | ![](../../docs/figs/Agent-HUD/notification-good.png) | +| ![](../../docs/figs/Agent-HUD/notification-bad.png) | ![](../../docs/figs/Agent-HUD/notification-good.png) |
Note that the speed of the agent is affected by the `drag` and `angular drag` properties, which means that the agent will slow down over time if the keys are not pressed. -- **Rotation Speed**: The rotation speed of the agent, which is set to `100` by code. This is the speed at which the agent rotates when the `A` and `D` keys are pressed. Rotation speed is unaffected by the `drag` and `angular drag` properties. -- **Rotation Angle**: The angle of rotation of the agent, which is `0.25` by code. This property is used to dictate the angle of rotating the agent when the `A` and `D` keys are pressed. Rotation angle is unaffected by the `drag` and `angular drag` properties. +* **Scale**: The scale of the agent, which is set to `1x1x1` by default. +* **Mass**: The mass of the agent, which is set to `100` by default. +* **Drag**: The drag of the agent, which is set to `1.2` by default. +* **Angular Drag**: The angular drag of the agent, which is set to `0.05` by default. +* **Gravity**: Enabled for the agent (and for all other objects for that matter), which means that the Agent will fall to the ground when spawned if its `y` coordinate `> 0`. +* **Speed**: The speed of the agent, which is set to `30` by code. This is the speed at which the agent moves when the `W`, `A`, `S`, and `D` keys are pressed. Note that the speed of the agent is affected by the `drag` and `angular drag` properties, which means that the agent will slow down over time if the keys are not pressed. +* **Rotation Speed**: The rotation speed of the agent, which is set to `100` by code. This is the speed at which the agent rotates when the `A` and `D` keys are pressed. Rotation speed is unaffected by the `drag` and `angular drag` properties. +* **Rotation Angle**: The angle of rotation of the agent, which is `0.25` by code. This property is used to dictate the angle by which the agent rotates when the `A` and `D` keys are pressed. Rotation angle is unaffected by the `drag` and `angular drag` properties.
### Complex Agent Properties (ML-Agents / Training) Please refer to ML-Agents for documentation for a full breakdown of the Agent's Properties: [ML-Agent's Documentation](https://github.com/Unity-Technologies/ml-agents/blob/f442194297f878a84eb60c04eccf7662cbc9ff60/docs/Learning-Environment-Design-Agents.md#L467). Here is a brief overview of the properties: -- **Behavior Parameters** +* **Behavior Parameters** This component dictates the policy the agent will follow and includes several sub-settings: -- **Behavior Name** +* **Behavior Name** A unique identifier for the agent's behavior. Agents with the same name share the same policy. -- **Vector Observation** - - **Space Size**: Defines the length of the vector observation for the agent. - - **Stacked Vectors**: Number of previous vector observations to be stacked together. +* **Vector Observation** + + **Space Size**: Defines the length of the vector observation for the agent. + + **Stacked Vectors**: Number of previous vector observations to be stacked together. -- **Actions** - - **Continuous Actions**: Number of concurrent continuous actions the agent can take. - - **Discrete Branches**: An array defining multiple concurrent discrete actions. +* **Actions** + + **Continuous Actions**: Number of concurrent continuous actions the agent can take. + + **Discrete Branches**: An array defining multiple concurrent discrete actions. -- **Model** +* **Model** Refers to the neural network model used for decision-making. -- **Inference Device** +* **Inference Device** Determines whether to use CPU or GPU during inference. -- **Behavior Type** +* **Behavior Type** Sets the mode of operation for the agent: - - **Default**: Trains if connected to a Python trainer; otherwise, performs inference. - - **Heuristic Only**: Uses a heuristic method for decision-making. - - **Inference Only**: Always uses its trained model for decision-making. + + **Default**: Trains if connected to a Python trainer; otherwise, performs inference. 
+ + **Heuristic Only**: Uses a heuristic method for decision-making. + + **Inference Only**: Always uses its trained model for decision-making. -- **Max Step** +* **Max Step** Defines the maximum number of steps an agent can take in an episode. Currently, this is not implemented as we have the Health of the agent as the episode termination condition, which is custom to our environment. - ## GameObjects All objects can be configured in the same manner, using a set of parameters for each `item` Unity gameobject: -- `name`: the name of the object you want to spawn, which must match the object name specified in [Arena Object Definitions](/docs/Arena-Object-Definitions.md). You can spawn the same object as many times as required, but they must be in different positions from one another. -- `positions`: a list of `Vector3` positions within the arena where you want to spawn items, if the list is empty the position will be sampled randomly in the arena. Any position vector set to -1 will spawn randomly. Also note that Animal-AI enforces a constraint where objects cannot spawn within 0.1 units of each other, so if you try to spawn objects too close together there will be object collision clashes and the objects will not spawn. -- `sizes`: a list of `Vector3` sizes, if the list is empty the size will be sampled randomly (within preset bounds for that particular object). You can set any size to -1 to spawn randomly along that vector only. -- `rotations`: a list of `float` in the range `[0,360]`, if the list is empty the rotation is sampled randomly. Default is 0 degrees. -- `colors`: a list of `RGB` values (integers in the range `[0,255]`), if the list is empty the color is sampled randomly. Note that not all objects can have their colour changed and for those (e.g. transparent objects) this value will be ignored. +* `name`: the name of the object you want to spawn, which must match the object name specified in [Arena Object Definitions](/docs/Arena-Object-Definitions.md). 
You can spawn the same object as many times as required, but they must be in different positions from one another. +* `positions`: a list of `Vector3` positions within the arena where you want to spawn items; if the list is empty, the position will be sampled randomly in the arena. Any position vector set to -1 will spawn randomly. Also note that Animal-AI enforces a constraint where objects cannot spawn within 0.1 units of each other, so if you try to spawn objects too close together there will be object collision clashes and the objects will not spawn. +* `sizes`: a list of `Vector3` sizes; if the list is empty, the size will be sampled randomly (within preset bounds for that particular object). You can set any size to -1 to spawn randomly along that vector only. +* `rotations`: a list of `float` in the range `[0,360]`; if the list is empty, the rotation is sampled randomly. Default is 0 degrees. +* `colors`: a list of `RGB` values (integers in the range `[0,255]`); if the list is empty, the color is sampled randomly. Note that not all objects can have their colour changed and for those (e.g. transparent objects) this value will be ignored. +**N.B:** Any of these parameters can be omitted in the configuration files per object, in which case the omitted fields are automatically randomized.
However, we advise that you specify these parameters as this will allow you to have a more controlled environment in your arena(s). Any Vector3 that contains a -1 for any of its dimensions will spawn that dimension randomly `(e.g. x: -1, y: 10, z: 2 --> will spawn the object randomly along the x axis)`. Finally, some objects have specific parameters applicable only to them, which are described in the [Unique/Special Objects](#uniquespecial-object-parameters) section.

-**All value ranges for the above fields can be found in [Arena Object Definitions](/docs/Arena-Object-Definitions.md)**. If you go above or below the range for size it will automatically be set to the max or min respectively. If you try to spawn objects outside the arena (i.e. with a configuration like this: `x = 41, z = 41`) or overlapping with another object with very close spawn positions, then that object will not be spawned. Objects are placed in the order defined such that the second overlapping object is the one that does not spawn.
+**All value ranges for the above fields can be found in [Arena Object Definitions](/docs/Arena-Object-Definitions.md)**. If you go above or below the range for size it will automatically be set to the max or min respectively. If you try to spawn objects outside the arena (e.g. with a configuration like `x = 41, z = 41`) or overlapping with another object with very close spawn positions, then that object will not be spawned. Objects are placed in the order defined such that the second overlapping object is the one that does not spawn.

## Unique/Special Object Parameters

@@ -198,85 +212,87 @@ Some objects have unique/special parameters that only apply to them or a select

### Agent-Specific Parameters

-- **Skins**:
+* **Skins**:
List of animal skins for the agent model.
- - **Applies to:** Agent
- - **Default:** "random" (any animal from the list)
- - **Options:** "panda", "pig", "hedgehog", "random", etc.
+ + **Applies to:** Agent + + **Default:** "random" (any animal from the list) + + **Options:** "panda", "pig", "hedgehog", "random", etc. -- **Frozen Agent Delays**: +* **Frozen Agent Delays**: Time (in frames) the agent is frozen at the start of an episode. - - **Applies to:** Agent - - **Default:** 0 (no delay), n (delay of n frames) + + **Applies to:** Agent + + **Default:** 0 (no delay), n (delay of n frames) ### Goal-Related Parameters -- **Delays**: + +* **Delays**: Time delay before special behavior initiation. - - **Applies to:** DecayGoal, AntiDecayGoal, GrowGoal, ShrinkGoal, SpawnerTree, SpawnerDispenser, SpawnerContainer - - **Default:** 0 + + **Applies to:** DecayGoal, AntiDecayGoal, GrowGoal, ShrinkGoal, SpawnerTree, SpawnerDispenser, SpawnerContainer + + **Default:** 0 -- **Initial Values**: +* **Initial Values**: Starting reward/size values. - - **Applies to:** DecayGoal, AntiDecayGoal, GrowGoal, ShrinkGoal, SpawnerTree - - **Default:** Varies by goal type + + **Applies to:** DecayGoal, AntiDecayGoal, GrowGoal, ShrinkGoal, SpawnerTree + + **Default:** Varies by goal type -- **Final Values**: +* **Final Values**: Ending reward/size values. - - **Applies to:** DecayGoal, AntiDecayGoal, GrowGoal, ShrinkGoal, SpawnerTree - - **Default:** Varies by goal type + + **Applies to:** DecayGoal, AntiDecayGoal, GrowGoal, ShrinkGoal, SpawnerTree + + **Default:** Varies by goal type -- **Change Rates**: +* **Change Rates**: Rate at which reward/size changes. - - **Applies to:** DecayGoal, AntiDecayGoal, GrowGoal, ShrinkGoal - - **Default:** 0.005 (negative for decaying/shrinking) + + **Applies to:** DecayGoal, AntiDecayGoal, GrowGoal, ShrinkGoal + + **Default:** 0.005 (negative for decaying/shrinking) ### Spawner Parameters -- **Spawn Counts**: + +* **Spawn Counts**: Number of goals spawned. 
- - **Applies to:** SpawnerTree, SpawnerDispenser, SpawnerContainer - - **Default:** -1 (infinite) + + **Applies to:** SpawnerTree, SpawnerDispenser, SpawnerContainer + + **Default:** -1 (infinite) -- **Spawn Colors**: +* **Spawn Colors**: Color of spawned objects. - - **Applies to:** SpawnerTree, SpawnerDispenser, SpawnerContainer - - **Default:** Varies by spawner + + **Applies to:** SpawnerTree, SpawnerDispenser, SpawnerContainer + + **Default:** Varies by spawner -- **Times Between Spawns**: +* **Times Between Spawns**: Interval between spawns. - - **Applies to:** SpawnerTree, SpawnerDispenser, SpawnerContainer - - **Default:** 4.0 for trees, 1.5 otherwise + + **Applies to:** SpawnerTree, SpawnerDispenser, SpawnerContainer + + **Default:** 4.0 for trees, 1.5 otherwise -- **Ripen Times**: +* **Ripen Times**: Duration for goals to ripen in a tree. - - **Applies to:** SpawnerTree - - **Default:** 6.0 + + **Applies to:** SpawnerTree + + **Default:** 6.0 -- **Door Delays**: +* **Door Delays**: Time for a spawner's door to open. - - **Applies to:** SpawnerDispenser, SpawnerContainer - - **Default:** 10.0 + + **Applies to:** SpawnerDispenser, SpawnerContainer + + **Default:** 10.0 -- **Times Between Door Opens**: +* **Times Between Door Opens**: Interval for a spawner's door to open. - - **Applies to:** SpawnerDispenser, SpawnerContainer - - **Default:** -1 (stays open once opened) + + **Applies to:** SpawnerDispenser, SpawnerContainer + + **Default:** -1 (stays open once opened) ### SignBoard Parameters -- **Symbol Names**: - Names of symbols to be drawn. - - **Applies to:** SignBoard - - **Default:** "default" - - **Options:** "left-arrow", "right-arrow", etc. +* **Symbol Names**: + Names of symbols to be drawn. + + **Applies to:** SignBoard + + **Default:** "default" + + **Options:** "left-arrow", "right-arrow", etc. ## Blackouts _Blackouts_ define when the lights are on or off during an episode in each arena, resulting in a black screen/view in any camera angle. 
This is an optional parameter in the configuration file, and can be omitted if you don't want to use it. If omitted, the lights will be on for the entire episode. -- **Default Behavior**: Lights are on for the entire episode if no blackout parameter is provided. -- **List of Frames**: Provide a list like `[5,10,15,20,25]` to toggle lights. Lights will be off between frames 5-9, 15-19, etc., and on at other times. -- **Regular Intervals**: Use a negative number like `[-20]` to toggle lights every 20 frames. -- **Infinite Episodes**: For episodes with `t=0`, lights will follow the pattern indefinitely. +* **Default Behavior**: Lights are on for the entire episode if no blackout parameter is provided. +* **List of Frames**: Provide a list like `[5,10,15,20,25]` to toggle lights. Lights will be off between frames 5-9, 15-19, etc., and on at other times. +* **Regular Intervals**: Use a negative number like `[-20]` to toggle lights every 20 frames. +* **Infinite Episodes**: For episodes with `t=0`, lights will follow the pattern indefinitely. **Note**: With a list of frames, the lights will stay off after the last frame in the list for infinite episodes. @@ -288,23 +304,23 @@ When configuring an arena, follow these rules and be aware of certain behaviors: ### Spawning GameObjects -- **Non-Overlapping**: Objects can only spawn if they don't overlap with others. Overlapping attempts discard the latter object (i.e. the object that is spawned later in the configuration file). This is to avoid object collision issues during runtime. -- **Spawn Order**: Objects are spawned in the order listed. Randomized objects (i.e. the object is set to spawn randomly) try to spawn up to 20 times concurrently in any given arena/episode; if unsuccessful, the object is discarded and the arena continues to spawn the next object in the list (if any). -- **Spawn Likelihood**: Early list objects are more likely to spawn than later ones. 
This is because the configuration file is scanned from top to bottom, and objects are spawned in the order they are found.
-- **Agent Spawning**:
- - The Agent spawns randomly within the arena bounds if it's spawn position is not specified.
- - Specified Agent positions are processed _first_, which might conflict with randomly spawned objects that are spawned _after_ the Agent, as the Agent's position is not known until runtime.
- - If you have defined the Agent's position and another object tries to spawn at the same position as the Agent then the environment will always spawn the Agent in that position always, as _the Agent has priority above every other object_. This is to avoid a potential conflict during runtime.
- - Some objects can spawn on top of each other (a `0.1` height buffer added to accomodate this).
+* **Non-Overlapping**: Objects can only spawn if they don't overlap with others. Overlapping attempts discard the latter object (i.e. the object that is spawned later in the configuration file). This is to avoid object collision issues during runtime.
+* **Spawn Order**: Objects are spawned in the order listed. Randomized objects (i.e. the object is set to spawn randomly) try to spawn up to 20 times concurrently in any given arena/episode; if unsuccessful, the object is discarded and the arena continues to spawn the next object in the list (if any).
+* **Spawn Likelihood**: Early list objects are more likely to spawn than later ones. This is because the configuration file is scanned from top to bottom, and objects are spawned in the order they are found.
+* **Agent Spawning**:
+ + The Agent spawns randomly within the arena bounds if its spawn position is not specified.
+ + Specified Agent positions are processed _first_, which might conflict with randomly spawned objects that are spawned _after_ the Agent, as the Agent's position is not known until runtime.
+ + If you have defined the Agent's position and another object tries to spawn at the same position as the Agent, then the environment will always spawn the Agent in that position, as _the Agent has priority above every other object_. This is to avoid a potential conflict during runtime.
+ + Some objects can spawn on top of each other (a `0.1` height buffer added to accommodate this).

### Configuration File Values

-- **n: !Arena**: The `n` in `n: !Arena` is a placeholder integer between `0` and `n`, where `n` is the number of arenas defined in a single configuration file. The first arena must start with `0`, upto `n` arenas. Any negative arena numbers are automatically converted to positive integers accordingly, resulting in flexible and robust arena numbering and management.
-- **Object Names**: Must match names from [Arena Object Definitions](/docs/Arena-Object-Definitions.md). Unmatched names are ignored and may result in unexpected behavior or fatal errors.
-- **Randomization**: Use `-1` or leave blank in `positions`, `sizes`, and `rotations` for random values for any object that supports randomization (see [Arena Object Definitions](/docs/Arena-Object-Definitions.md)).
-- **Ground Level Spawning**: Setting `positions.y = 0` spawns objects at ground level (with a `0.1` height buffer to prevent gameobject clipping).
-- **Goal Scaling**: Goals (except red zone) scale equally on all axes. For sphere goals, only the `x` component of `Vector3` scales all axes.
-- **Arena Height Bounds**: Currently, objects can spawn at any height within the arena, which translates to the `y` component of `Vector3` in the configuration file.
This is not an issue for objects that spawn on the ground, as they will spawn at the specified `x` and `z` coordinates, and at ground level (i.e. `y = 0`).
-- **Arena Size Bounds**: The arena is currently a square of fixed size `40x40`, meaning the size of the arena is immutable, with the origin of the arena is set to `(0,0)`. You can provide coordinates for objects in the range `[0,40]x[0,40]` as floats. Any coordinates outside this range will be discarded and the object will not spawn. We plan to make the arena size configurable in the future.
+* **n: !Arena**: The `n` in `n: !Arena` is a placeholder integer from `0` to `n-1`, where `n` is the number of arenas defined in a single configuration file. The first arena must start with `0`. Any negative arena numbers are automatically converted to positive integers accordingly, resulting in flexible and robust arena numbering and management.
+* **Object Names**: Must match names from [Arena Object Definitions](/docs/Arena-Object-Definitions.md). Unmatched names are ignored and may result in unexpected behavior or fatal errors.
+* **Randomization**: Use `-1` or leave blank in `positions`, `sizes`, and `rotations` for random values for any object that supports randomization (see [Arena Object Definitions](/docs/Arena-Object-Definitions.md)).
+* **Ground Level Spawning**: Setting `positions.y = 0` spawns objects at ground level (with a `0.1` height buffer to prevent gameobject clipping).
+* **Goal Scaling**: Goals (except red zone) scale equally on all axes. For sphere goals, only the `x` component of `Vector3` scales all axes.
+* **Arena Height Bounds**: Currently, objects can spawn at any height within the arena, which translates to the `y` component of `Vector3` in the configuration file.
However, a recommended height range is between `0` and `50` units, as anything above `50` units will be out of the camera's view until the object falls from the sky and lands on the ground at the specified `x` and `z` coordinates, which may take a while depending on the object's mass and drag properties. This is not an issue for objects that spawn on the ground, as they will spawn at the specified `x` and `z` coordinates, and at ground level (i.e. `y = 0`).
+* **Arena Size Bounds**: The arena is currently a square of fixed size `40x40`, meaning the size of the arena is immutable, with the origin of the arena set to `(0,0)`. You can provide coordinates for objects in the range `[0,40]x[0,40]` as floats. Any coordinates outside this range will be discarded and the object will not spawn. We plan to make the arena size configurable in the future.

----
\ No newline at end of file
+---
diff --git a/docs/gettingStarted/Getting-Started.md b/docs/gettingStarted/Getting-Started.md
index a75d63e86..30b8c8091 100644
--- a/docs/gettingStarted/Getting-Started.md
+++ b/docs/gettingStarted/Getting-Started.md
@@ -12,7 +12,6 @@ This document should be your introductory document to Animal-AI, which outlines
* [If you are a contributor](#if-you-are-a-contributor)
-
# What is Animal-AI?

Animal-AI is a platform for training and testing AI agents and human participants on a variety of tasks that require a rich understanding of the environment. The platform is built upon the Unity game engine, with Ml-Agents Toolkit used for backend functionality for training, and is designed to be extensible and easy to use. The platform is being used to study cognitive capabilities across humans, animals and AI agents comparatively across a variety of tasks and experiments.
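To tie the arena-configuration rules above together (`n: !Arena` numbering, per-object parameter lists, `-1` randomization, `blackouts`, and the `[0,40]` coordinate range), here is a minimal, hypothetical YAML fragment. The tags (`!ArenaConfig`, `!Item`, `!Vector3`), the object name `GoodGoal`, and all values are illustrative sketches of the AAI convention — check [Arena Object Definitions](/docs/Arena-Object-Definitions.md) and the [YAML background docs](/docs/Background-YAML.md) for the exact schema and value ranges:

```yaml
!ArenaConfig
arenas:
  0: !Arena          # the first arena must be numbered 0
    t: 250           # episode length; t: 0 gives an infinite episode
    blackouts: [5, 10, 15, 20, 25]   # lights off during frames 5-9, 15-19, and from 25 on
    items:
    - !Item
      name: GoodGoal               # must match a name from Arena Object Definitions
      positions:
      - !Vector3 {x: 10, y: 0, z: -1}   # z: -1 -> sampled randomly along z only
      sizes:
      - !Vector3 {x: 1, y: 1, z: 1}
      rotations: [45]              # degrees, in [0, 360]
```

Omitted fields (`colors` here) are randomized automatically, as described above.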
diff --git a/docs/gettingStarted/Installation-Guide.md b/docs/gettingStarted/Installation-Guide.md index 1bf2886c0..61612aca5 100644 --- a/docs/gettingStarted/Installation-Guide.md +++ b/docs/gettingStarted/Installation-Guide.md @@ -1,18 +1,20 @@ # Detailed Installation Guide #### Table of Contents -- [Introduction](#introduction) - - [Steps:](#steps) - - [1. Installing Python](#1-installing-python) - - [2. Cloning the Animal-AI Repository (Optional)](#2-cloning-the-animal-ai-repository-optional) - - [3. Setting Up a Virtual Environment (Optional)](#3-setting-up-a-virtual-environment-optional) - - [4. Installing Dependencies](#4-installing-dependencies) - - [5. Downloading the Animal-AI Environment](#5-downloading-the-animal-ai-environment) - - [6. Starting Animal-AI](#6-starting-animal-ai) - - [General Notes](#general-notes) - - [Troubleshooting](#troubleshooting) + +* [Introduction](#introduction) + + [Steps:](#steps) + + [1. Installing Python](#1-installing-python) + + [2. Cloning the Animal-AI Repository (Optional)](#2-cloning-the-animal-ai-repository-optional) + + [3. Setting Up a Virtual Environment (Optional)](#3-setting-up-a-virtual-environment-optional) + + [4. Installing Dependencies](#4-installing-dependencies) + + [5. Downloading the Animal-AI Environment](#5-downloading-the-animal-ai-environment) + + [6. Starting Animal-AI](#6-starting-animal-ai) + + [General Notes](#general-notes) + + [Troubleshooting](#troubleshooting) ## Introduction + Welcome to the comprehensive installation guide for Animal-AI. This guide is tailored for users who may not be familiar with Python dependencies, navigating GitHub repositories, or working with Unity. It's also here to help you smoothly navigate through any installation hiccups – because let's face it, *it's custom software installation... when **isn't** there a hiccup or two?*. For **Windows** Users: @@ -21,59 +23,65 @@ This guide is primarily written with Windows users in mind. 
We've tried to make
For **Mac** and **Linux** Users:
Similarly for Mac, most of the instructions for Windows users should still apply to you. If you are using MacOS, you may also need to run this command: `chmod -R 777 AnimalAI.app` in your MacOS terminal to unlock permissions for running the application.

-If you're a Linux user, you're likely more comfortable with command-line interfaces and installations. Please **note** that if you are using Linux, you may need to make the .exe file executable: Simply run this command in your terminal: `chmod +x env/AnimalAI.x86_64`. Please also make sure that when you extract the folder, you move the files inside the sub-directory to its parent folder.
+If you're a Linux user, you're likely more comfortable with command-line interfaces and installations. Please **note** that if you are using Linux, you may need to make the binary executable: simply run this command in your terminal: `chmod +x env/AnimalAI.x86_64`. Please also make sure that when you extract the folder, you move the files inside the sub-directory to its parent folder.

## Steps:
+
### 1. Installing Python

-- **Download Python**: Obtain Python 3.9.x from [Python's official website](https://www.python.org/downloads/).
-- **Run the Installer**: Follow the installation instructions. Ensure to **add Python to your PATH** (via the checkbox). Note: if you're doing a custom intallation, it is recommended to keep the `install pip` box ticked and use `pip` to install dependencies.
-- **Check Installation**: Open a Command Prompt terminal and run `python --version`. You should see the version you installed. Make sure it's Python 3.9.x.
-- todo: if using conda, specify python to 3.9 in the conda environment setup (conda create --name your_env_name python=3.9) or application
+* **Download Python**: Obtain Python 3.9.x from [Python's official website](https://www.python.org/downloads/).
+* **Run the Installer**: Follow the installation instructions.
Ensure you **add Python to your PATH** (via the checkbox). Note: if you're doing a custom installation, it is recommended to keep the `install pip` box ticked and use `pip` to install dependencies.
+* **Check Installation**: Open a Command Prompt terminal and run `python --version`. You should see the version you installed. Make sure it's Python 3.9.x.
+* **Conda users**: specify Python 3.9 when creating your environment, e.g. `conda create --name your_env_name python=3.9`.

### 2. Cloning the Animal-AI Repository (Optional)

-- **Prepare a Directory**: Create a root folder for the AnimalAI project for better organization.
-- **Clone the Repository**: Options include:
- - Downloading the `.zip` file from [Animal-AI GitHub](https://github.com/Kinds-of-Intelligence-CFI/animal-ai) and extracting it.
- - Using [GitHub Desktop](https://desktop.github.com/) for direct cloning.
- - Cloning via the [GitHub CLI](https://docs.github.com/en/github-cli/github-cli/about-github-cli).
-- **Check**: The root folder should contain the `animal-ai-main` folder.
+* **Prepare a Directory**: Create a root folder for the AnimalAI project for better organization.
+* **Clone the Repository**: Options include:
+ + Downloading the `.zip` file from [Animal-AI GitHub](https://github.com/Kinds-of-Intelligence-CFI/animal-ai) and extracting it.
+ + Using [GitHub Desktop](https://desktop.github.com/) for direct cloning.
+ + Cloning via the [GitHub CLI](https://docs.github.com/en/github-cli/github-cli/about-github-cli).
+* **Check**: The root folder should contain the `animal-ai-main` folder.

### 3. Setting Up a Virtual Environment (Optional)

-- **Creating a Virtual Environment**: Useful for managing dependencies.
- - **Python**: Use `python -m venv your_env_name` and activate it in the `Scripts` directory with `activate`.
- - **Conda**: Use `conda create --name your_env_name` and activate with `conda activate your_env_name`.
+ +* **Creating a Virtual Environment**: Useful for managing dependencies. + + **Python**: Use `python -m venv your_env_name` and activate it in the `Scripts` directory with `activate`. + + **Conda**: Use `conda create --name your_env_name` and activate with `conda activate your_env_name`. For more information on virtual environments, refer to the [Python Documentation](https://docs.python.org/3/tutorial/venv.html) or [Conda Documentation](https://docs.conda.io/projects/conda/en/latest/user-guide/tasks/manage-environments.html). ### 4. Installing Dependencies -- **Navigate to the Repository**: Go to `animal-ai-main`. -- **Install Dependencies**: - - **Using pip**: Run `pip install animalai`. This will install the dependencies necessary to run Animal-AI, located in the `animalai` package from PyPI (Python Package Index). - - **Using Conda**: Install pip (`conda install pip`), then run `pip install animalai`. - - **Using a Virtual Environment**: Activate your virtual environment, then run `pip install animalai`. -- **Check**: Run `pip list` to see if `animalai` is installed. You should obtain the latest version of the package automatically. + +* **Navigate to the Repository**: Go to `animal-ai-main`. +* **Install Dependencies**: + + **Using pip**: Run `pip install animalai`. This will install the dependencies necessary to run Animal-AI, located in the `animalai` package from PyPI (Python Package Index). + + **Using Conda**: Install pip (`conda install pip`), then run `pip install animalai`. + + **Using a Virtual Environment**: Activate your virtual environment, then run `pip install animalai`. +* **Check**: Run `pip list` to see if `animalai` is installed. You should obtain the latest version of the package automatically. ### 5. Downloading the Animal-AI Environment -- **Download**: Get the version for your OS from the `Releases` section in the repository. -- **Extract**: Unzip into the `env` folder in the main repository. 
We use the `env` folder to store the environment files. You can use WinRAR or 7-Zip to extract the files. -- **Check**: The `env` folder should contain the `.exe` file and other files from the `.zip/.rar` download. + +* **Download**: Get the version for your OS from the `Releases` section in the repository. +* **Extract**: Unzip into the `env` folder in the main repository. We use the `env` folder to store the environment files. You can use WinRAR or 7-Zip to extract the files. +* **Check**: The `env` folder should contain the `.exe` file and other files from the `.zip/.rar` download. ### 6. Starting Animal-AI -- You can now start using Animal-AI by launching the application for your OS, located in the directory where you saved the folder, typically in your Downloads folder. _Note that Animal-AI does not need to be installed in your system to run._ - - **Windows**: Run `env/AnimalAI.exe`. - - **Mac**: Run `env/AnimalAI.app`. - - **Linux**: Run `env/AnimalAI.x86_64`. -- **Note**: If you're using a virtual environment, make sure to activate it before running Animal-AI. -- **Check**: The Animal-AI application should open in a new window, with a brief Unity loading screen, which indicates you have successfully installed and started Animal-AI. +* You can now start using Animal-AI by launching the application for your OS, located in the directory where you saved the folder, typically in your Downloads folder. _Note that Animal-AI does not need to be installed in your system to run._ + + **Windows**: Run `env/AnimalAI.exe`. + + **Mac**: Run `env/AnimalAI.app`. + + **Linux**: Run `env/AnimalAI.x86_64`. +* **Note**: If you're using a virtual environment, make sure to activate it before running Animal-AI. +* **Check**: The Animal-AI application should open in a new window, with a brief Unity loading screen, which indicates you have successfully installed and started Animal-AI. ### General Notes -Folder navigation in Windows is performed using the `cd` command, e.g. 
if the current directory is shown as `:C\Users\Name` and you want to go to your new Animal-AI root folder called "AAI", you would type `cd AAI` and it will now show you are at `:C\Users\Name\AAI`. To go to the *parent* directory (e.g. in this case `:C\Users`), you would type `cd..` and if your directory name contains spaces, use speech marks e.g. `cd "AAI Folder"`. You can also use the `dir` command to list the contents of the current directory, and `dir /b` to list the contents without any additional information.
-Everything you need to run scripts in Animal-AI (including the correct version of Unity's `ml-agents` package) is found in the Python Index Package `animalai`. This is installed using `pip` or `conda` as described above. The `animalai` package is a Python wrapper for the Unity environment, and is the only dependency you need to install. The `animalai` package is also the only dependency you need to import in your scripts, and it will import everything else you need from the `animalai` package itself.
+Folder navigation in Windows is performed using the `cd` command, e.g. if the current directory is shown as `C:\Users\Name` and you want to go to your new Animal-AI root folder called "AAI", you would type `cd AAI` and it will now show you are at `C:\Users\Name\AAI`. To go to the *parent* directory (e.g. in this case `C:\Users`), you would type `cd..`; if your directory name contains spaces, use quotation marks, e.g. `cd "AAI Folder"`. You can also use the `dir` command to list the contents of the current directory, and `dir /b` to list the contents without any additional information.
+
+Everything you need to run scripts in Animal-AI (including the correct version of Unity's `ml-agents` package) is found in the Python Package Index (PyPI) package `animalai`. This is installed using `pip` or `conda` as described above. The `animalai` package is a Python wrapper for the Unity environment, and is the only dependency you need to install.
The `animalai` package is also the only dependency you need to import in your scripts, and it will import everything else you need from the `animalai` package itself. ### Troubleshooting + You can then start using Animal-AI! Any problems, please get in touch (ia424@cam.ac.uk / [alhasacademy96](https://github.com/alhasacademy96/)) or post an issue on the GitHub repository. -Visit our FAQ page for more information on common issues and solutions [here](/docs/FAQ.md). \ No newline at end of file +Visit our FAQ page for more information on common issues and solutions [here](/docs/FAQ.md). diff --git a/project/AAI-RoadMap.md b/project/AAI-RoadMap.md index 70394bcc1..2cc84c0a0 100644 --- a/project/AAI-RoadMap.md +++ b/project/AAI-RoadMap.md @@ -1,8 +1,10 @@ # Animal-AI RoadMap #### Table of Contents -- [Project Overview](#overview) -- [Roadmap](#roadmap) + +* [Project Overview](#overview) +* [Roadmap](#roadmap) + # Project Overview @@ -12,44 +14,43 @@ We wish to enable the possibility of interdisciplinary research to better unders ## 2.1 Initial Port + RayCasts (Released 01/07/2021) -- [x] Port Unity Environment from ml-agents 0.15 to 2.0 -- [x] Port basic python scripts from ml-agents 0.15 to 2.0 -- [x] Add RayCast observations +* [x] Port Unity Environment from ml-agents 0.15 to 2.0 +* [x] Port basic python scripts from ml-agents 0.15 to 2.0 +* [x] Add RayCast observations The environment was ported to ml-agents 2.0. Raycast observations added and ensured to be roughly backwards compatible with 2.0. 
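As a side note on the blackout documentation touched earlier in this changeset: the toggling semantics there (a list of toggle frames, or a negative number for a regular interval) can be sketched in Python. The helper below is purely illustrative and is not part of the `animalai` package:

```python
def lights_on(frame: int, blackouts: list[int]) -> bool:
    """Illustrative sketch of the documented blackout semantics.

    - []              : no blackouts, lights always on
    - [5, 10, 15, ...]: each listed frame toggles the lights (starting on),
                        so lights are off during frames 5-9, 15-19, and stay
                        off after the last toggle for infinite episodes
    - [-20]           : toggle every 20 frames (on 0-19, off 20-39, ...)
    """
    if not blackouts:
        return True
    if blackouts[0] < 0:
        interval = -blackouts[0]
        # even interval index -> lights on, odd -> lights off
        return (frame // interval) % 2 == 0
    # count how many toggle frames have occurred up to and including `frame`
    toggles = sum(1 for f in blackouts if f <= frame)
    return toggles % 2 == 0
```

For example, with `[5, 10, 15, 20, 25]` the lights are off at frame 7 and back on at frame 12, matching the behaviour described in the Blackouts section.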
## 2.2 Health and Basic Scripts (Released 13/07/2021)

-- [x] Switched from reward system to health system (from DRL perspective functionally similar but unlocks more tasks and better integration with a continual learning setting)
-- [x] Added decaying rewards
-- [x] Improved Hotzone/Deathzone graphics and allow scaling
-- [x] Added/improved python wrappers for all main usecases (play, openAIgym, lowlevelAPI, mlagents-learn)
-- [x] Added heuristic agent for testing/debugging
-- [x] Improved play mode overlay
+* [x] Switched from reward system to health system (from DRL perspective functionally similar but unlocks more tasks and better integration with a continual learning setting)
+* [x] Added decaying rewards
+* [x] Improved Hotzone/Deathzone graphics and allow scaling
+* [x] Added/improved python wrappers for all main use cases (play, openAIgym, lowlevelAPI, mlagents-learn)
+* [x] Added heuristic agent for testing/debugging
+* [x] Improved play mode overlay

Previous setting had an abstract system where food = +ve reward and time = -ve reward. This will be converted to decaying health that must be maintained by seeking out reward. Many tasks are functionally identical, but this setup is better for future tasks and also persistent survival. Other additions are improvements to the environment that go with this change and the initial setup of scripts as tutorials for using different training settings.

## 2.3 Experiment, Object, and Graphical Improvements (Released 13/10/2021)

-- [x] Major graphics update to all items
-- [x] Goals that decay/ripen/change size
-- [x] More items for setting up experiments
-- [x] Improved documentation
+* [x] Major graphics update to all items
+* [x] Goals that decay/ripen/change size
+* [x] More items for setting up experiments
+* [x] Improved documentation

This update is focused on improving the environment for experimentation.
This includes a major graphics update to all items, the addition of goals that decay/ripen/change size, and more items for setting up experiments. This update also includes improved documentation which enhances user experience. ## 3.3 Animal-AI 'Version 3' [Major Release - 25/12/2023] -- [x] Migrate to Unity Editor 2022 -- [x] Migrate to ml-agents 0.30.0 -- [x] Fix major graphical bugs affecting shadows and object placement -- [x] Add interactive objects to environment +* [x] Migrate to Unity Editor 2022 +* [x] Migrate to ml-agents 0.30.0 +* [x] Fix major graphical bugs affecting shadows and object placement +* [x] Add interactive objects to environment - [x] Add new objects to the environment that are interactable (SpawnerButton) by user and agents -- [x] Add more objects to the RayCast Parser (Unity and Python sides) -- [x] Overhaul documentation and tutorials for the environment (play and training) +* [x] Add more objects to the RayCast Parser (Unity and Python sides) +* [x] Overhaul documentation and tutorials for the environment (play and training) - [x] Restructure documentation and GitHub repository to be more user friendly and easier to navigate - --- -_This roadmap is subject to change and is currently a work in progress._ \ No newline at end of file +_This roadmap is subject to change and is currently a work in progress._ diff --git a/project/AAI-Versions-Archive.md b/project/AAI-Versions-Archive.md index f2f7effae..2a683a552 100644 --- a/project/AAI-Versions-Archive.md +++ b/project/AAI-Versions-Archive.md @@ -10,4 +10,4 @@ Please note that we may not be able to respond to a problem related to the older _If you'd like to use any of the previous versions, please download the corresponding version of the environment from the table above._ The installation instructions for the older versions are the same as the latest version. Please refer to the [Installation Guide](/docs/gettingStarted/Installation-Guide.md) for more details. 
---- \ No newline at end of file +---