
Update Linear and Logistic Regression Parameters & Improve Documentation #8982

Open. rithin-pullela-aws wants to merge 3 commits into `main`.
Conversation

**@rithin-pullela-aws** commented on Dec 23, 2024:

- Add comprehensive documentation for the supported options:
  - Optimizers (`SIMPLE_SGD`, `LINEAR_DECAY_SGD`, etc.)
  - Objective types (`ABSOLUTE_LOSS`, `HUBER`, `SQUARED_LOSS`)
  - Momentum types (`STANDARD`, `NESTEROV`)
- Fix parameter name typos

Description

This PR documents the supported optimizer, objective, and momentum types for the ML Commons linear and logistic regression algorithms and fixes parameter name typos.

Issues Resolved

Closes #8981

Version

List the OpenSearch version to which this PR applies, e.g. 2.14, 2.12--2.14, or all.

Frontend features

If you're submitting documentation for an OpenSearch Dashboards feature, add a video that shows how a user will interact with the UI step by step. A voiceover is optional.

Checklist

- By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license and subject to the Developer Certificate of Origin.
  For more information on the Developer Certificate of Origin and signing off your commits, please check here.

An automated bot commented:

Thank you for submitting your PR. The PR states are In progress (or Draft) -> Tech review -> Doc review -> Editorial review -> Merged.

Before you submit your PR for doc review, make sure the content is technically accurate. If you need help finding a tech reviewer, tag a maintainer.

When you're ready for doc review, tag the assignee of this PR. The doc reviewer may push edits to the PR directly or leave comments and editorial suggestions for you to address (let us know in a comment if you have a preference). The doc reviewer will arrange for an editorial review.

**@kolchfa-aws** (Collaborator) left a comment:


Thank you, @rithin-pullela-aws! Could we add some more information for the user about these options?

@@ -412,23 +423,27 @@ The Localization algorithm can only be executed directly. Therefore, it cannot b

A classification algorithm, logistic regression models the probability of a discrete outcome given an input variable. ML Commons supports both binary and multi-class classification. The most common is binary classification, which takes two values, such as "true/false" or "yes/no", and predicts the outcome based on the values specified. Alternatively, a multi-class output can categorize different inputs by type. This makes logistic regression most useful for situations in which you are trying to determine how your inputs best fit into a specified category.

**Optimizers supported:** `SIMPLE_SGD`, `LINEAR_DECAY_SGD`, `SQRT_DECAY_SGD`, `ADA_GRAD`, `ADA_DELTA`, `ADAM`, and `RMS_PROP`.
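
For orientation (this is not part of the diff): a minimal sketch of a train request that exercises these options. It follows the ML Commons `_train/{algorithm}` request pattern; `optimizerType` and `momentumType` come from the parameter tables quoted below, while the `target` parameter and the index and field names are hypothetical, so verify all of them against the merged documentation:

```json
POST /_plugins/_ml/_train/logistic_regression
{
  "parameters": {
    "target": "class",
    "optimizerType": "ADA_GRAD",
    "momentumType": "STANDARD"
  },
  "input_query": {
    "_source": ["sepal_length_in_cm", "petal_length_in_cm", "class"],
    "size": 10000
  },
  "input_index": ["iris_data"]
}
```
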
**@kolchfa-aws** (Collaborator) commented:
Could we add a description for each optimizer, objective, and momentum type so the user can choose appropriately?

| `momentumType` | String | The Stochastic Gradient Descent (SGD) momentum that helps accelerate gradient vectors in the right direction, leading to faster convergence between vectors. | `STANDARD` |
| `optimizerType` | String | The optimizer used in the model. | `AdaGrad` |
| `decay_rate` | Double | The decay rate used by the Root Mean Squared Propagation (RMSProp) optimizer. | `0.9` |
| `momentum_type` | String | The Stochastic Gradient Descent (SGD) momentum that helps accelerate gradient vectors in the right direction, leading to faster convergence between vectors. | `STANDARD` |
**@kolchfa-aws** (Collaborator) commented:
Suggested change
| `momentum_type` | String | The Stochastic Gradient Descent (SGD) momentum that helps accelerate gradient vectors in the right direction, leading to faster convergence between vectors. | `STANDARD` |
| `momentum_type` | String | The Stochastic Gradient Descent (SGD) momentum that helps accelerate gradient vectors in the correct direction, leading to faster convergence between vectors. | `STANDARD` |

| `optimizerType` | String | The optimizer used in the model. | `AdaGrad` |
| `decay_rate` | Double | The decay rate used by the Root Mean Squared Propagation (RMSProp) optimizer. | `0.9` |
| `momentum_type` | String | The Stochastic Gradient Descent (SGD) momentum that helps accelerate gradient vectors in the right direction, leading to faster convergence between vectors. | `STANDARD` |
| `optimiser` | String | The optimizer used in the model. | `ADA_GRAD` |
**@kolchfa-aws** (Collaborator) commented:
Is this the correct parameter name spelling? The American spelling is "optimizer".
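
A note that may help here: the `decay_rate` row above describes the RMSProp decay factor, so the setting presumably only takes effect when the optimizer is set to `RMS_PROP`. A hypothetical `parameters` fragment tying the linear regression rows together (the optimizer key is spelled as in the diff, and whether it should be `optimizer` is exactly the open question above; `target` and `objective_type` are assumed names, with the objective value taken from the supported list in the PR description):

```json
"parameters": {
  "target": "price",
  "optimiser": "RMS_PROP",
  "decay_rate": 0.9,
  "momentum_type": "STANDARD",
  "objective_type": "SQUARED_LOSS"
}
```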
