
Lock Down Sample Configs #85

Merged
merged 20 commits into dev from fix/lock-down-sample-configs on Oct 3, 2024
Conversation

@JSabadin (Contributor) commented Oct 1, 2024

Configuration Updates

Summary

This PR introduces the creation of light and heavy configuration files for various tasks, including segmentation, detection, keypoints, and classification.

Key Changes

  • Added light and heavy model configurations for:
    • Segmentation
    • Detection
    • Keypoints
    • Classification

@JSabadin JSabadin requested a review from a team as a code owner October 1, 2024 11:20
@JSabadin JSabadin requested review from kozlov721, klemen1999, tersekmatija and conorsim and removed request for a team October 1, 2024 11:20
@github-actions github-actions bot added the fix Fixing a bug label Oct 1, 2024
@klemen1999 (Collaborator) left a comment
Left some comments. Overall, let's try to handle variants similarly to how we do it for e.g. EfficientRep or RepPANNeck.

@@ -1,25 +1,24 @@
# Example configuration for training a predefined segmentation model
Collaborator:

Nitpick: let's add this comment to the other sample configs as well and describe what they are for (e.g. "Example configuration for training a light segmentation model", or something similar).

@@ -105,7 +96,7 @@ trainer:
n_sanity_val_steps: 1
profiler: null
verbose: True
- batch_size: 4
+ batch_size: 8
accumulate_grad_batches: 1
epochs: &epochs 200
n_workers: 8
Collaborator:

Can we double-check whether the train_metrics_interval parameter is even used anywhere? I believe we don't actually call the function where it is used. If that is the case, we should remove this config parameter (and the function).

normalize:
active: True

- batch_size: 4
+ batch_size: 8
epochs: &epochs 200
n_workers: 4
validation_interval: 10
Collaborator:

Let's include exporter.output_names in this config as well. For detection models to work with dai.YoloNeuralNetwork out of the box (with included postprocessing), the output names should be [output1_yolov6r2, output2_yolov6r2, output3_yolov6r2].
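A minimal sketch of what that could look like in the detection config; the exact exporter key layout is assumed from this comment, not confirmed against the repo:

```yaml
exporter:
  output_names: [output1_yolov6r2, output2_yolov6r2, output3_yolov6r2]
```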

@@ -0,0 +1,44 @@
# DDRNet-23-slim model for segmentation
Collaborator:

Let's remove this comment about the specifics of DDRNet:

# Refer to here for optimal hyperparameters for this model: https://github.com/Deci-AI/super-gradients/blob/4797c974c7c445d12e2575c468848d9c3e04becd/src/super_gradients/recipes/cityscapes_ddrnet.yaml#L4

model:
name: ddrnet_segmentation
Collaborator:

Change the model names to reflect the actual model, e.g. segmentation_light (same comment for the other configs).

@@ -33,10 +33,10 @@ def __init__(
n_warmup_epochs: int = 4,
iou_type: IoUType = "giou",
reduction: Literal["sum", "mean"] = "mean",
- class_loss_weight: float = 1.0,
- iou_loss_weight: float = 2.5,
+ class_loss_weight: float = 0.5,
Collaborator:

Are these changes based on the model source repo/paper?

Contributor Author:

Those are based on YOLOv8.

task_name: str | None = None
visualizer_params: Params = field(
Collaborator:

Should we rather update the default classification visualizer params with these values?

@@ -14,19 +14,32 @@

@dataclass
class ClassificationModel(BasePredefinedModel):
- backbone: str = "MicroNet"
+ variant: str | None = None
Collaborator:

Can we make the variants more similar to how we handle them in e.g. EfficientRep? That is, make the predefined model an actual class (instead of a dataclass) with all the parameters being None by default (except variant="lite" by default). Then have a VariantLiteral type ["lite", "heavy"] and a function that gets the specific parameters based on the variant and uses them where others aren't specified (e.g. use "ResNet-18" as the backbone if variant="lite" and no backbone is set, but if the backbone is explicitly set, use that one).

The same can be done for all the other predefined models.
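A minimal sketch of the pattern suggested above, assuming the "lite"/"heavy" variant names from this comment; the per-variant defaults and the heavy backbone are illustrative, not the repo's actual values:

```python
from __future__ import annotations

from typing import Literal

VariantLiteral = Literal["lite", "heavy"]

# Hypothetical per-variant defaults; the real values would live alongside
# each predefined model ( "ResNet-18" for "lite" is taken from the comment,
# the "heavy" backbone is a placeholder).
VARIANT_DEFAULTS: dict[str, dict[str, str]] = {
    "lite": {"backbone": "ResNet-18"},
    "heavy": {"backbone": "ResNet-50"},
}


class ClassificationModel:
    def __init__(
        self,
        variant: VariantLiteral = "lite",
        backbone: str | None = None,
    ):
        defaults = VARIANT_DEFAULTS[variant]
        # An explicitly passed parameter wins; otherwise fall back to the
        # defaults resolved from the variant.
        self.backbone = backbone if backbone is not None else defaults["backbone"]
```

With this shape, ClassificationModel() resolves the "lite" backbone, while ClassificationModel(backbone="MicroNet") keeps the explicit override regardless of variant.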

@@ -14,36 +14,61 @@

@dataclass
class SegmentationModel(BasePredefinedModel):
- backbone: str = "MicroNet"
+ variant: str | None = None
Collaborator:

Keep backbone as a parameter (same as with ClassificationModel) so it can be explicitly set.

@JSabadin JSabadin requested a review from klemen1999 October 1, 2024 15:14
@kozlov721 (Collaborator) left a comment

Left some comments. Additionally, the test_predefined_models in tests/integration/test_simple.py will need to be changed to use the new configs.
Otherwise LGTM
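As a sketch of what the updated test could iterate over, the config file names below are assumed from the PR summary (segmentation, detection, keypoints, classification in light and heavy variants); the repo's actual naming may differ:

```python
from pathlib import Path

# Hypothetical file names for the new light/heavy sample configs,
# derived from the PR summary; the real names in configs/ may differ.
TASKS = ["segmentation", "detection", "keypoint", "classification"]
SIZES = ["light", "heavy"]
CONFIG_FILES = [f"{task}_{size}_model.yaml" for task in TASKS for size in SIZES]


def predefined_model_configs(config_dir: str = "configs") -> list[Path]:
    """Paths that a parametrized test_predefined_models could loop over."""
    return [Path(config_dir) / name for name in CONFIG_FILES]
```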

Resolved review threads on:
  • configs/classification_heavy_model.yaml
  • configs/classification_light_model.yaml
  • configs/example_export.yaml
@klemen1999 (Collaborator):
We should also update the README under predefined_models accordingly

codecov bot commented Oct 2, 2024

Codecov Report

Attention: Patch coverage is 94.89796% with 5 lines in your changes missing coverage. Please review.

Please upload report for BASE (dev@c97f921). Learn more about missing BASE report.

✅ All tests successful. No failed tests found.

Files with missing lines Patch % Lines
...ain/config/predefined_models/segmentation_model.py 93.54% 2 Missing ⚠️
...n/config/predefined_models/classification_model.py 95.00% 1 Missing ⚠️
..._train/config/predefined_models/detection_model.py 95.23% 1 Missing ⚠️
...nfig/predefined_models/keypoint_detection_model.py 95.65% 1 Missing ⚠️
Additional details and impacted files
@@          Coverage Diff           @@
##             dev      #85   +/-   ##
======================================
  Coverage       ?   96.66%           
======================================
  Files          ?      142           
  Lines          ?     6324           
  Branches       ?        0           
======================================
  Hits           ?     6113           
  Misses         ?      211           
  Partials       ?        0           


@kozlov721 kozlov721 changed the title Fix/lock down sample configs Lock Down Sample Configs Oct 2, 2024
@klemen1999 klemen1999 merged commit ca9ab66 into dev Oct 3, 2024
9 checks passed
@klemen1999 klemen1999 deleted the fix/lock-down-sample-configs branch October 3, 2024 08:21
kozlov721 added a commit that referenced this pull request Oct 9, 2024
Co-authored-by: klemen1999 <[email protected]>
Co-authored-by: Martin Kozlovsky <[email protected]>
3 participants