Example: play a match
Please refer to the Configuration Files page for this part: the available maps depend on the current configuration files.
A scenario is composed of a static part, the GameBoard, and a dynamic part, the GameState. To load one, you just need to know the name of the scenario:
board, state = buildScenario('Junction')
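The snippets on this page assume that the relevant helpers, constants, and pandas have already been imported. The exact module paths depend on the repository layout, so the following is only an illustrative sketch with hypothetical paths:
import pandas as pd                        # used later to build DataFrames from the match history

# NOTE: the module paths below are hypothetical; adjust them to the actual package layout.
from core.scenarios import buildScenario   # hypothetical location of the scenario factory
from core.const import RED, BLUE           # hypothetical location of the player constants
# the agent classes and the MatchManager used below must be imported from the project as well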
To play a game, you need two agents: one for the red player and one for the blue player.
Simple agents can be instantiated directly:
red = GreedyAgent(RED, seed=seed)
blue = GreedyAgent(BLUE, seed=seed)
Depending on the agent, additional parameters may be available:
red = AlphaBetaAgent(RED, maxDepth=3)
blue = AlphaBetaAgent(BLUE, maxDepth=3, timeLimit=20)
All agents built using Machine Learning techniques need to load their model(s) from disk. This is managed by the agent itself, but you need to specify the path to the required model files. A model file can be specific to a particular scenario or generic.
red = ClassifierAgent(RED, 'models/Junction_cls_red.joblib', seed=seed)
blue = ClassifierAgent(BLUE, 'models/Junction_cls_blue.joblib', seed=seed)
red = RegressionAgent(RED, 'models/Junction_reg_red.joblib', seed=seed)
blue = RegressionAgent(BLUE, 'models/Junction_reg_blue.joblib', seed=seed)
Some other ML agents require multiple model files to work:
red = RegressionMultiAgent(RED,
                           'models/Junction_red_attack.joblib',
                           'models/Junction_red_move.joblib',
                           'models/Junction_red_pass.joblib',
                           seed)
blue = RegressionMultiAgent(BLUE,
                            'models/Junction_blue_attack.joblib',
                            'models/Junction_blue_move.joblib',
                            'models/Junction_blue_pass.joblib',
                            seed)
The MatchManager is the object in charge of controlling the evolution of a game with the given agents:
mm = MatchManager('', red, blue, board, state, seed=seed)
mm.play()
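Putting the steps above together, a minimal end-to-end sketch looks like the following (it uses only names introduced on this page; the seed value is an arbitrary choice for reproducibility):
seed = 42                                  # arbitrary seed, only for reproducibility

# build the scenario and the two simple agents
board, state = buildScenario('Junction')
red = GreedyAgent(RED, seed=seed)
blue = GreedyAgent(BLUE, seed=seed)

# let the MatchManager run the full match
mm = MatchManager('', red, blue, board, state, seed=seed)
mm.play()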
After a game has ended, it is possible to collect some information from the MatchManager object itself, such as the winner:
logger.info('winner: %s', mm.winner)
Utility functions can help convert the recorded history of actions into a Pandas DataFrame:
# it is also possible to get information on the history of played actions...
actions_cols = vectorActionInfo()
actions_data = [vectorAction(x) for x in mm.actions_history]
df_actions = pd.DataFrame(columns=actions_cols, data=actions_data)
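The resulting DataFrame can then be inspected with the usual pandas tools, for example:
print(df_actions.shape)    # one row per action played during the match
print(df_actions.head())   # quick look at the first recorded actions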
The same can be done for the evolution of the GameStates:
states_cols = vectorStateInfo()
states_data = [vectorState(x) for x in mm.states_history]
df_states = pd.DataFrame(columns=states_cols, data=states_data)
Vector information on the GameBoard can be extracted as well:
board_cols = vectorBoardInfo()
board_data = [vectorBoard(board, s, a) for s, a in zip(mm.states_history, mm.actions_history)]
df_board = pd.DataFrame(columns=board_cols, data=board_data)
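The three DataFrames can be persisted for offline analysis with standard pandas calls (the file names used here are arbitrary):
# save each DataFrame to a CSV file for later analysis
for name, df in [('actions', df_actions), ('states', df_states), ('board', df_board)]:
    df.to_csv(f'Junction_{name}.csv', index=False)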
Finally, it is also possible to collect from the agents the data recorded via the register() method:
df_red = mm.red.createDataFrame()
df_blue = mm.blue.createDataFrame()
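Assuming the two agents registered data with the same schema, the two DataFrames can also be combined into a single one; this is only a sketch under that assumption:
# assumes df_red and df_blue share the same columns
df_match = pd.concat([df_red, df_blue], ignore_index=True)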