YoloV8 Training Slower than YoloV5? #1819
-
I'm trying to train a custom dataset on yolov8n.pt. I used YOLOv5 until I saw that v8 was out and decided to try it. My first attempt at training the dataset took over 1200 minutes, while training on YOLOv5 only took around 200. My YOLOv5 setup looks like this: My v8 training code is here:
Even at 1000 epochs, v8 takes longer than 2000 epochs of v5. I noticed that v8 takes some time to start each epoch (the gaps between GPU spikes are longer), after which it trains quickly. v5 starts its epochs much faster. Is there a setting I'm forgetting that would make the v8 epochs start sooner?
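The code blocks from the original post aren't preserved here. For reference, this is a minimal sketch of what a YOLOv8 training run typically looks like; the dataset path and hyperparameters are assumed, not the poster's actual values:

```python
# Minimal sketch of a YOLOv8 training run (assumed dataset config and settings,
# not the original poster's code). A YOLOv5 run is usually the CLI equivalent,
# e.g. `python train.py --data data.yaml --weights yolov5n.pt --epochs 2000`.
from ultralytics import YOLO

model = YOLO("yolov8n.pt")     # pretrained nano weights, as in the post
results = model.train(
    data="data.yaml",          # hypothetical dataset config
    epochs=1000,
    imgsz=640,
    batch=16,
)
```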
Replies: 1 comment 1 reply
-
Found a temporary solution in another thread relating to the long epoch time: set workers=0 (see the sketch below). From what I can tell it's a Windows bug. You might also use Linux to make things faster until a proper fix is found.
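A minimal sketch of that workaround, assuming the same yolov8n.pt checkpoint and a hypothetical data.yaml:

```python
# Sketch: disable dataloader multiprocessing workers, which avoids the slow
# per-epoch worker startup seen on Windows.
from ultralytics import YOLO

model = YOLO("yolov8n.pt")
model.train(
    data="data.yaml",   # hypothetical dataset config
    epochs=1000,
    imgsz=640,
    workers=0,          # workaround: no dataloader worker processes
)
```

The trade-off is that data loading runs in the main process, so it only helps when worker startup, not augmentation throughput, is the bottleneck.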
@ajrenzo this should be resolved in the latest version of YOLOv8, which now uses the faster InfiniteDataloader by default. You can update your package with:

```
pip install -U ultralytics
```
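To check that the upgrade took effect, one quick option (an illustrative snippet, not part of the original reply) is to print the installed package version before rerunning training:

```python
# Sketch: confirm which ultralytics version is installed after upgrading.
from importlib.metadata import version

print(version("ultralytics"))  # should show the latest release after the pip upgrade
```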