NvEnc Limit Stop Testing for device -- Various Fixes #28

Open
wants to merge 6 commits into main
Conversation

nathangur

Issue related to this PR: #27

  • Better Nvidia driver limit checks, with the known GPU driver limit now persisted across tests.

  • Fixed some weird CUDA behavior.

  • Now uses ffprobe to check the output files (see the sketch after this list).

  • More comprehensive and in-depth ffmpeg error logging.

  • Fixed MaxStreams for Nvidia GPUs always being 1 after a test finished.
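
For reference, a minimal sketch of what an ffprobe-based file check could look like (the function name and the exact checks here are assumptions for illustration, not necessarily the code in this PR):

```python
import json
import subprocess

def probe_is_valid(path: str) -> bool:
    """Run ffprobe on the file; a non-zero exit code or no streams means it is unusable."""
    proc = subprocess.run(
        ["ffprobe", "-v", "error", "-print_format", "json",
         "-show_format", "-show_streams", path],
        capture_output=True, text=True,
    )
    if proc.returncode != 0:
        return False
    info = json.loads(proc.stdout or "{}")
    return bool(info.get("streams"))
```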

Possible issues:

On my machine the CPU likes to jump from 1 worker to 9 workers; no clue if it's just really excited to be benchmarking or if it's bugged.

@BotBlake
Owner

BotBlake commented Oct 1, 2024

Note: The high jump in workers is expected.
This is part of our "worker upscaling" logic.

For upscaling the number of workers, we generally look at the speed of the run.
So e.g. :

Run: 1, Workers: 1, Speed: 5.2x
-> Workers = Workers + Speed
Run: 2, Workers: 6, Speed: 1.3x
[...]

We also have downscaling measures that compensate for overly aggressive upscaling.

I assume this is why your CPU jumps from 1 worker straight to 9.
The same should also happen on GPU tests, if the GPU is powerful enough.

That way we save loads of time compared to upscaling by 1 every run.
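
For illustration, the upscaling rule above could be sketched roughly like this (a minimal sketch; the function name, its arguments, and the rounding are assumptions, not the project's actual code):

```python
# Illustrative sketch of the "workers += speed" upscaling rule described above.
def next_worker_count(workers: int, speed: float) -> int:
    """Jump ahead by the measured speed factor instead of incrementing by 1."""
    if speed > 1.0:
        return round(workers + speed)   # e.g. 1 worker at 5.2x -> 6 workers next run
    return workers                      # at or below realtime, no further upscaling

print(next_worker_count(1, 5.2))  # 6
print(next_worker_count(6, 1.3))  # 7
```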

…requires a failure to be present.

Fix CPU being "None" when the server expects it to be an intenger.
@BotBlake
Owner

BotBlake commented Oct 2, 2024

Note:
The script will scale up the worker amount until some sort of failure happens.

In some specific conditions we scale down to verify that we found the absolute maximum of streams.

Adding a value to failure_reason implies that a failure has happened!
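
To make the failure semantics concrete, here is a rough sketch of that control flow (`run_test`, its result fields, and the step sizes are assumed names for illustration, not the script's real API):

```python
def find_max_streams(run_test) -> int:
    """Scale workers up until a run fails, then step back down to confirm the maximum."""
    workers = 1
    while True:
        result = run_test(workers)
        if result.failure_reason is not None:
            break                                  # a set failure_reason marks the run as failed
        workers += max(1, round(result.speed))     # upscale by the measured speed
    # Scale down one worker at a time until a run succeeds again; that count is the maximum.
    while workers > 1:
        workers -= 1
        if run_test(workers).failure_reason is None:
            return workers
    return 0
```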

@BotBlake BotBlake self-requested a review October 5, 2024 12:55