modes/train/ #8075
Replies: 174 comments 474 replies
-
How can I print IoU and F-score with the training results?
-
How are we able to save sample labels and predictions on the validation set during training? I remember this being easy in YOLOv5, but I have not been able to figure it out with YOLOv8.
-
If I am not mistaken, the logs shown during training also contain the box (P, R, mAP@0.5 and mAP@0.5:0.95) and mask (P, R, mAP@0.5 and mAP@0.5:0.95) metrics for the validation set during each epoch. Then why is it that when running model.val() with best.pt, I am getting worse metrics? From the training and validation curves it is clear that the model is overfitting on the segmentation task, but that is a separate issue. Can you please help me out with this?
-
So, imgsz works differently when training than when predicting? For train: if it's an Is this right?
-
Hi all, I have a segmentation model trained on custom data with a single class, but there has been a trend toward overfitting in the last several training runs. I tried adding more data to the training set, which reduced box_loss and cls_loss on val, but dfl_loss is increasing. Are there any suggestions for tuning the model? Thanks a lot.
-
I have a question about training the segmentation model. I have objects in my dataset that occlude each other, such that the top object separates the segmentation mask of the bottom object into two independent parts. As far as I can see, the coordinates of each point are listed sequentially in the label file. If I add the points of the two masks one after the other in the coordinates of the same object, will that solve the problem?
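A quick sketch of the idea, using pure Python and made-up normalized coordinates: a YOLO segmentation label is one line per instance (class id followed by x y pairs), so two parts of an occluded object can be written as one concatenated point list. Ultralytics' own conversion tooling goes further and connects the parts so the polygon stays contiguous, but the one-line-per-instance format below is the key point.

```python
# Sketch: merging two mask parts of one occluded object into a single
# YOLO segmentation label line. The class id and all polygon points
# (normalized x, y pairs) go on one line. Coordinates are illustrative.

def merge_parts(class_id, *parts):
    """Build one label line from a class id and polygon parts."""
    coords = [f"{v:.6f}" for part in parts for point in part for v in point]
    return " ".join([str(class_id)] + coords)

# two rectangles standing in for the upper and lower halves of one object
upper_part = [(0.10, 0.20), (0.30, 0.20), (0.30, 0.40), (0.10, 0.40)]
lower_part = [(0.10, 0.60), (0.30, 0.60), (0.30, 0.80), (0.10, 0.80)]

line = merge_parts(0, upper_part, lower_part)
print(line)  # one line: class id + 16 coordinate values
```

This keeps both parts attached to the same instance rather than labeling them as two separate objects.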
-
Hello there!
-
Hello, I am working on a project for Android devices. The GPU and CPU of my device are weak. Will it speed things up if I set imgsz to 320 for training? Or what are your recommendations? What happens if imgsz is 640 for training and 320 for prediction? And what changes if imgsz is 320 for both training and prediction? Sorry for my English. Note: I converted the model to tflite. Thanks, you are amazing.
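A rough back-of-the-envelope on why a smaller imgsz helps on weak hardware: for a fully convolutional model, compute scales roughly with the number of input pixels, so halving imgsz cuts the per-image cost to about a quarter. Accuracy is usually best when train and predict imgsz match, since a mismatch makes objects appear at scales the model never saw. Actual tflite speedups on a phone will vary.

```python
# Relative inference cost as a function of imgsz, assuming cost scales
# with input pixel count (a rough approximation for conv backbones).

def relative_cost(imgsz, baseline=640):
    return (imgsz / baseline) ** 2

print(relative_cost(320))  # 0.25 -> ~4x fewer input pixels than 640
print(relative_cost(640))  # 1.0
```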
-
I've come to rely on YOLOv8 in my daily work; it's remarkably user-friendly. Thank you to the Ultralytics team for your excellent work on these models! I'm currently tackling a project focused on detecting minor defects on automobile engine parts. As the defects will be smaller objects in a given frame, could you offer guidance on training arguments or techniques that might improve performance for this type of data? I'm also interested in exploring attention mechanisms to enhance model performance, but I'd appreciate help understanding how to implement this. Special appreciation to the Ultralytics team.
-
Running the provided example led me to this Stackoverflow question: https://stackoverflow.com/q/75111196/815507 There are solutions on Stackoverflow. I wonder if you could help and update the guide to provide the best resolution?
-
We need to disable blur augmentation. I have filed an issue; Glenn suggested using blur=0, but it is not a valid argument. #8824
-
How can I train YOLOv8 with my custom dataset?
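The usual starting point is a small dataset YAML. A hypothetical example (all paths and class names below are placeholders for your own data):

```yaml
# Hypothetical data.yaml for a custom detection dataset
path: datasets/my_dataset   # dataset root
train: images/train         # train images, relative to path
val: images/val             # val images, relative to path
names:
  0: helmet
  1: glove
```

Training then runs with `yolo detect train data=data.yaml model=yolov8n.pt epochs=100 imgsz=640` from the CLI, or `YOLO('yolov8n.pt').train(data='data.yaml', epochs=100, imgsz=640)` from Python, per the docs page linked at the bottom of this thread.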
-
Hey, I was trying out training a custom object detection model using a pretrained YOLOv8 model.
0% 0/250 [00:00<?, ?it/s]
-
Hi! I'm working on a project where I plan to use YOLOv8 as the backbone for object detection, but I need a more hands-on approach during the training phase. How do I train the model manually: loop through epochs, perform forward propagation, calculate loss functions, backpropagate, and update the weights? At the moment model.train() seems to handle all of this automatically in the background. The end goal is knowledge distillation, but to start I need access to these things. I haven't been able to find any examples of YOLOv8 being used this way; some code and tips would be helpful.
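This is not the Ultralytics internals, just the generic loop structure that model.train() wraps (epochs, forward pass, loss, backward pass, weight update), shown on a toy one-parameter least-squares model with a hand-written gradient. For YOLOv8 the same skeleton would drive the underlying torch module (model.model) with its detection loss inside the loop.

```python
# Toy gradient-descent loop illustrating the stages that model.train()
# automates: forward -> loss -> backward -> update, repeated per epoch.

data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # x -> y = 2x
w, lr = 0.0, 0.05                            # single weight, learning rate

for epoch in range(200):
    grad, loss = 0.0, 0.0
    for x, y in data:
        pred = w * x            # forward pass
        err = pred - y
        loss += err * err       # squared-error loss
        grad += 2 * err * x     # backward pass (analytic gradient)
    w -= lr * grad / len(data)  # weight update

print(round(w, 3))  # converges to ~2.0
```

For distillation you would run two models in the forward step and add a term comparing student and teacher outputs to the loss before the backward pass.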
-
I'm trying to understand the concept of training. I would like to extend the default classes with helmet, gloves, etc.
Thanks in advance
-
Hi folks! When one calls And another question: is it that the files created during training under
-
Hello, I am using some augmentation operations such as mosaic, copy_fraction, and eraser while training a yolov8-seg model. But I have a question: these operations may destroy the integrity of the label. For example, the original label is a person, but after augmentation it may become half a person. Could this mislead the model into learning that half a person is also a whole person? Thank you!
-
from ultralytics import YOLO

if __name__ == '__main__':
-
Can YOLOv8 directly convert the fish in a video into a black-and-white mask image output after segmentation prediction?
-
Hi, I'm just wondering if YOLOv8 accepts class weights in the data.yaml file:
names: ['class 1', 'class 2', 'class 3']
classes_weights: [c1_weight, c2_weight, c3_weight]
If not, how can I incorporate a weighted loss in YOLOv8?
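As far as I can tell, YOLOv8 does not read a classes_weights key from data.yaml; weighting the loss means modifying the loss code itself (for example, torch's BCEWithLogitsLoss accepts a pos_weight tensor). Purely as a sketch with made-up counts, one common way to derive per-class weights from label frequencies:

```python
# Inverse-frequency class weights from (hypothetical) label counts:
# rarer classes get proportionally larger weights. These values would
# then be wired into a modified loss, not into data.yaml.

label_counts = {"class 1": 500, "class 2": 120, "class 3": 30}

total = sum(label_counts.values())
n_classes = len(label_counts)

# weight inversely proportional to how often the class appears
weights = {c: total / (n_classes * n) for c, n in label_counts.items()}
print({c: round(w_, 2) for c, w_ in weights.items()})
```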
-
1. I am using yolov8x-world.pt to train the model with the settings you provided, but it took 11 days to complete training for 100 epochs. Why? What is the proper way to train and get high accuracy in a short time period?
-
I have set kpt_shape to [8,3] in the yaml file, but the prompt message indicates that a 14-column label file is still required. My label file has 5+24 columns.
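The arithmetic here is worth spelling out: a pose label row has 1 class id + 4 box values + nkpt * ndim keypoint values. A 14-column requirement corresponds to kpt_shape [3, 3], which suggests the [8, 3] setting is not the one actually being loaded (e.g. the edit went into a model yaml rather than the dataset yaml the trainer reads, or vice versa).

```python
# Expected label-file width for a YOLO pose dataset.

def label_columns(nkpt, ndim):
    return 1 + 4 + nkpt * ndim  # class id + box (cx, cy, w, h) + keypoints

print(label_columns(8, 3))   # 29 columns, matching a 5+24 label file
print(label_columns(3, 3))   # 14 columns, matching the error message
```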
-
Hello,
-
Hi, I have been using this command
-
Hello,
-
How can the augmented data generated during the training process be outputted?
-
Please tell me how to set imgsz when I use the YOLO11 model for training. Can it only be an integer (meaning the final input is a square)? After I modify this parameter, does the network's input pipeline apply the corresponding transformations? For example, with imgsz=1280, does that mean my input data will be resized to 1280*1280 and fed into the network for training? Thanks!
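As I understand the pipeline, an integer imgsz does not stretch the image to a square: Ultralytics letterboxes, scaling so the longer side equals imgsz (aspect ratio preserved) and padding the rest to imgsz x imgsz. A small sketch of that shape computation:

```python
# Letterbox geometry: scale the long side to imgsz, pad to a square.
# This reproduces the shape math only, not the actual pixel resampling.

def letterbox_shape(h, w, imgsz=1280):
    r = imgsz / max(h, w)                  # uniform scale factor
    new_h, new_w = round(h * r), round(w * r)
    pad_h, pad_w = imgsz - new_h, imgsz - new_w
    return (new_h, new_w), (pad_h, pad_w)

print(letterbox_shape(720, 1280, 1280))
# ((720, 1280), (560, 0)) -> content stays 720x1280, padded to 1280x1280
```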
-
How can I do fine-tuning with existing .onnx weights?
-
When training, how does the model know whether the training data is BGR or RGB?
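The model itself never "knows" the channel order; the dataloader fixes it. As I understand it, Ultralytics reads images with OpenCV (BGR) and flips them to RGB in its preprocessing, so arrays you feed in yourself are expected in OpenCV's BGR order. The conversion is just a channel reversal:

```python
# BGR -> RGB is a reversal of the channel axis; shown here on a single
# pixel with illustrative values.

pixel_bgr = [30, 120, 200]    # one pixel in OpenCV's (B, G, R) order
pixel_rgb = pixel_bgr[::-1]   # reversed -> (R, G, B)
print(pixel_rgb)  # [200, 120, 30]
```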
-
I referred to your documentation, which says that running one epoch on the COCO dataset with an A100 GPU took 20 minutes and 36 seconds. Now, I have the same COCO dataset, and I want to know whether an RTX 4090 GPU will let me complete one epoch in 20 minutes and 36 seconds or less, ideally under 20 minutes. I ask because you said the RTX 4090 is faster than the A100 for most tasks!
-
Step-by-step guide to train YOLOv8 models with Ultralytics YOLO including examples of single-GPU and multi-GPU training
https://docs.ultralytics.com/modes/train/