This repository has been archived by the owner on Dec 31, 2023. It is now read-only.
I want to run a PyTorch program under Sanic with multiple workers, but CUDA requires compatible start-method configuration in the main process, and I can't find the relevant settings anywhere in the documentation.
[2021-12-02 16:36:56 +0800] [64614] [ERROR] Exception occurred while handling uri: 'http://localhost:8000/predict/file'
Traceback (most recent call last):
File "handle_request", line 83, in handle_request
class Sanic(BaseSanic, metaclass=TouchUpMeta):
File "/opt/projects/ai-server/server.py", line 29, in predict_file
result = app.processor.process(img)
File "/opt/projects/ai-server/models/Detect.py", line 82, in process
img = self.pre_processor(image)
File "/opt/projects/ai-server/models/Detect.py", line 74, in pre_processor
img = torch.from_numpy(img).to(self.device)
File "/home/breakfox/anaconda3/envs/ai-server/lib/python3.9/site-packages/torch/cuda/__init__.py", line 204, in _lazy_init
raise RuntimeError(
RuntimeError: Cannot re-initialize CUDA in forked subprocess. To use CUDA with multiprocessing, you must use the 'spawn' start method
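As the RuntimeError above says, CUDA state inherited through fork cannot be re-initialized in the child, so worker processes must be created with the 'spawn' start method and initialize CUDA themselves. A minimal sketch of that pattern using Python's standard multiprocessing (this is an illustration of the start-method fix, not Sanic's own worker API; the `worker` function and queue are made up for the example, and the real Detect model loading would happen inside the worker):

```python
import multiprocessing as mp

def worker(q):
    # In a real server each spawned worker would initialize CUDA here
    # (e.g. build the model and call .to("cuda")), because the parent
    # process never touched CUDA and nothing stale is inherited.
    q.put("ready")

if __name__ == "__main__":
    # Use a spawn context instead of the default fork on Linux, as the
    # error message requires for CUDA + multiprocessing.
    ctx = mp.get_context("spawn")
    q = ctx.Queue()
    procs = [ctx.Process(target=worker, args=(q,)) for _ in range(2)]
    for p in procs:
        p.start()
    results = [q.get() for _ in procs]
    for p in procs:
        p.join()
    print(results)
```

The same idea applies inside a web server: do not create the model (or call anything that touches `torch.cuda`) at import time in the main process; defer it to a per-worker startup hook so each worker initializes its own CUDA context.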