feat: Adding cuda:n device allocation #694
base: main
Conversation
Signed-off-by: ahn <[email protected]>
08872b9 to 7425436
the same functionality. In case the alias envvar is set and the user tries to override the
parameter in settings initialization, Pydantic treats the parameter provided in __init__()
as an extra input instead of simply overwriting the envvar value for that parameter.
"""
I think this comment is useful and should stay. Why delete it?
device: AcceleratorDevice = AcceleratorDevice.AUTO
device: str = "auto"

@validator("device")
Better to avoid using @validator, because it is deprecated in Pydantic v2. Use the newer field validators instead (https://docs.pydantic.dev/latest/concepts/validators/#field-validators).
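A minimal sketch of what the suggested Pydantic v2 `field_validator` replacement could look like. The class name `AcceleratorOptions` and the accepted device strings (`auto`, `cpu`, `mps`, `cuda`, `cuda:n`) are assumptions for illustration, not the PR's final implementation:

```python
import re

from pydantic import BaseModel, field_validator


class AcceleratorOptions(BaseModel):
    # Hypothetical options class; the PR changes `device` to a plain string
    # so that an explicit index such as "cuda:1" can be passed through.
    device: str = "auto"

    @field_validator("device")
    @classmethod
    def validate_device(cls, value: str) -> str:
        # Accept the bare keywords or a "cuda:<index>" spec with a numeric index.
        if value in ("auto", "cpu", "mps", "cuda") or re.fullmatch(r"cuda:\d+", value):
            return value
        raise ValueError(f"Unsupported device: {value!r}")
```

Unlike the deprecated `@validator`, `field_validator` is explicit about being a classmethod and is the supported mechanism going forward in Pydantic v2.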
This WIP PR enables the allocation of specific CUDA devices, as the current implementation defaults to cuda:0. This allows launching on multi-GPU machines and avoids being restricted on cluster systems that assign devices per job.
Note: Further investigation is needed to make easyocr work with cuda:n. When using the GPU, easyocr wraps its models in torch.nn.DataParallel, which causes cuda:0 to be used regardless of the requested device.