
Can I turn off KV cache? #10

Open
purejomo opened this issue May 25, 2024 · 2 comments
Comments

@purejomo

Hello,

Could you please advise me on how to disable the KV cache?
I would also appreciate guidance on how to implement this change in the code.

Thank you for your assistance.

@JAYANDJEAN

You can refer to this: https://github.com/JAYANDJEAN/From_Transformer_to_GPTs/blob/main/04_llama2/llama.py I use a `use_cache` flag to control whether the cache is used, because the cache is not needed during training.
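To illustrate the idea, here is a minimal sketch of how such a `use_cache` flag can gate KV caching inside a single attention head. This is a simplified, hypothetical example (plain NumPy, single head, no causal mask or RoPE), not the actual code from llama.py:

```python
import numpy as np

class SimpleAttention:
    """Single-head attention with an optional KV cache (illustrative sketch)."""

    def __init__(self, dim: int, use_cache: bool = True):
        self.dim = dim
        self.use_cache = use_cache
        self.cache_k = None  # keys from previously processed tokens
        self.cache_v = None  # values from previously processed tokens

    def forward(self, q, k, v):
        # q, k, v: (seq_len, dim) arrays for the NEW tokens only.
        if self.use_cache:
            # Append the new keys/values to the cache, then attend
            # over the full cached history.
            self.cache_k = k if self.cache_k is None else np.concatenate([self.cache_k, k])
            self.cache_v = v if self.cache_v is None else np.concatenate([self.cache_v, v])
            keys, values = self.cache_k, self.cache_v
        else:
            # Cache disabled: the caller must pass the whole sequence
            # on every call (as during training).
            keys, values = k, v
        scores = q @ keys.T / np.sqrt(self.dim)
        weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
        weights /= weights.sum(axis=-1, keepdims=True)
        return weights @ values
```

With `use_cache=True`, tokens can be fed one at a time during generation and the last token's output matches a full-sequence pass with `use_cache=False`, which is why the flag can simply be switched off when the whole sequence is available anyway.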

@purejomo

@JAYANDJEAN
Thanks.
So does that mean I can turn off caching by modifying the code?
