This repository has been archived by the owner on Apr 25, 2023. It is now read-only.
@keon Thanks for your code. Just one question: how can we add the K-frame stacking to this, as described in the last sentence of the first paragraph of Section 4.1 of Mnih et al., Nature 2015?
4.1 Preprocessing and Model Architecture
Working directly with raw Atari frames, which are 210 × 160 pixel images with a 128 color palette, can be computationally demanding, so we apply a basic preprocessing step aimed at reducing the input dimensionality. The raw frames are preprocessed by first converting their RGB representation to gray-scale and down-sampling it to a 110 × 84 image. The final input representation is obtained by cropping an 84 × 84 region of the image that roughly captures the playing area. The final cropping stage is only required because we use the GPU implementation of 2D convolutions from [11], which expects square inputs. For the experiments in this paper, the function φ from Algorithm 1 applies this preprocessing to the last 4 frames of a history and stacks them to produce the input to the Q-function.
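For reference, here is a minimal sketch of that φ function, assuming NumPy and OpenCV are available. The `preprocess` and `FrameStack` names, and the crop offset of 18 rows, are illustrative choices, not from the paper or this repository:

```python
from collections import deque

import cv2  # OpenCV, assumed available for color conversion and resizing
import numpy as np


def preprocess(frame):
    """Gray-scale a raw 210x160x3 Atari frame, down-sample to 110x84,
    then crop an 84x84 region that roughly covers the playing area."""
    gray = cv2.cvtColor(frame, cv2.COLOR_RGB2GRAY)  # (210, 160)
    small = cv2.resize(gray, (84, 110))             # cv2 takes (width, height)
    cropped = small[18:102, :]                      # (84, 84); offset is a guess
    return cropped.astype(np.uint8)


class FrameStack:
    """Keeps the last k preprocessed frames and stacks them into the
    (84, 84, k) tensor the Q-network takes as input (phi in Algorithm 1)."""

    def __init__(self, k=4):
        self.k = k
        self.frames = deque(maxlen=k)

    def reset(self, frame):
        f = preprocess(frame)
        for _ in range(self.k):
            self.frames.append(f)  # pad the history at episode start
        return self.stacked()

    def step(self, frame):
        self.frames.append(preprocess(frame))
        return self.stacked()

    def stacked(self):
        return np.stack(self.frames, axis=-1)  # shape (84, 84, k)
```

With a gym-style loop, you would call `stack.reset(obs)` once per episode and feed `stack.step(obs)` to the network instead of the raw observation, so the Q-function always sees the last k frames rather than a single one.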