AAI’s episodic structure is currently tied to arenas: agents spawn into an arena and interact with it until it completes (e.g. the goal is reached or the timeout elapses). Then the agent is reset and the next arena is loaded.
Animals do not interact with their environment in an episodic way:

- Learning happens continuously.
- Animals remember events from arbitrarily far in the past and use this knowledge to shape their behaviour.
- Context switches are not well defined; they happen gradually or imperceptibly.
Tests in cognitive science can make use of these features. For example, an episodic memory task may involve allowing an agent to learn a route through a maze to a goal, and then presenting it with the same maze but with a path blocked (Sara and Seraphina are working on tests of this type in babies and AI, respectively).
Currently, the only way an agent in AAI can learn about the structure of an arena and use that knowledge elsewhere is through training on that arena. This is a problem because training can destroy previous capabilities (catastrophic forgetting), so we couldn’t analyse a “generalist agent’s” ability to learn the structure of new mazes in this way.
A more valid way to explore this would be to have an agent learn about the structure of the maze in context (i.e. within a single episode) and then continue that episode when the maze is switched to a new configuration.
To allow this, users should be able to specify that multiple arenas can be grouped together into one episode.
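A minimal sketch of what such a grouping could look like in an arena configuration file. The `!ArenaConfig` / `!Arena` YAML structure below follows AAI's existing config format, but the `episodeGroups` key is purely hypothetical — it does not exist in AAI today and is shown only to illustrate the proposal:

```yaml
# Hypothetical sketch: run arenas 0 and 1 as one continuous episode,
# with no agent reset in between. `episodeGroups` is a proposed key,
# not part of the current AAI config schema.
!ArenaConfig
arenas:
  0: !Arena
    t: 500            # learning phase: maze with the route to the goal open
    items:
      - !Item
        name: Wall
        positions:
          - !Vector3 {x: 20, y: 0, z: 20}
  1: !Arena
    t: 500            # test phase: same maze layout, one path blocked
    items:
      - !Item
        name: Wall
        positions:
          - !Vector3 {x: 20, y: 0, z: 20}
          - !Vector3 {x: 10, y: 0, z: 20}   # extra wall blocking the learned route

# Proposed extension: arenas listed in the same group share one episode,
# so the agent's memory and reward stream carry over across the switch.
episodeGroups:
  - [0, 1]
```

Details such as whether the agent keeps its position across the switch or respawns (while retaining its in-context memory) would need to be settled as part of the design.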