
[DOC] Tokenizers - Classic #8357

Merged
merged 15 commits into opensearch-project:main on Jan 3, 2025

Conversation

leanneeliatra
Contributor

Description

Addition of the Tokenizer - Classic documentation to the Analyzers section.

Issues Resolved

Part of #1483 addressed in this PR.

Version

All

Frontend features

n/a

Checklist

  • By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license and subject to the Developers Certificate of Origin.
    For more information on following Developer Certificate of Origin and signing off your commits, please check here.


Thank you for submitting your PR. PRs progress through the following states: In progress (or Draft) -> Tech review -> Doc review -> Editorial review -> Merged.

Before you submit your PR for doc review, make sure the content is technically accurate. If you need help finding a tech reviewer, tag a maintainer.

When you're ready for doc review, tag the assignee of this PR. The doc reviewer may push edits to the PR directly or leave comments and editorial suggestions for you to address (let us know in a comment if you have a preference). The doc reviewer will arrange for an editorial review.

@leanneeliatra leanneeliatra marked this pull request as ready for review September 24, 2024 09:53
@kolchfa-aws kolchfa-aws assigned vagimeli and unassigned kolchfa-aws Sep 24, 2024
@vagimeli vagimeli added 3 - Tech review PR: Tech review in progress Needs SME Waiting on input from subject matter expert analyzers labels Sep 24, 2024
@vagimeli
Contributor

@udabhas Will you review this PR for technical accuracy, or have a peer review it? Thank you.

By analyzing the text "Send an email to [email protected] or call 555-1234!", we can see that the punctuation has been removed, while the email address and phone number are tokenized as follows:

```
"Send", "an", "email", "to", "john.doe", "example.com", "or", "call", "555-1234"
```

nit: I was wondering if it would be better to show the entire output, as there are different token types:
{"<ALPHANUM>", "<APOSTROPHE>", "<ACRONYM>", "<COMPANY>", "<EMAIL>", "<HOST>", "<NUM>", "<CJ>", "<ACRONYM_DEP>"}

{
  "tokens": [
    {
      "token": "Send",
      "start_offset": 0,
      "end_offset": 4,
      "type": "<ALPHANUM>",
      "position": 0
    },
    {
      "token": "an",
      "start_offset": 5,
      "end_offset": 7,
      "type": "<ALPHANUM>",
      "position": 1
    },
    {
      "token": "email",
      "start_offset": 8,
      "end_offset": 13,
      "type": "<ALPHANUM>",
      "position": 2
    },
    {
      "token": "to",
      "start_offset": 14,
      "end_offset": 16,
      "type": "<ALPHANUM>",
      "position": 3
    },
    {
      "token": "[email protected]",
      "start_offset": 17,
      "end_offset": 37,
      "type": "<EMAIL>",
      "position": 4
    },
    {
      "token": "or",
      "start_offset": 38,
      "end_offset": 40,
      "type": "<ALPHANUM>",
      "position": 5
    },
    {
      "token": "call",
      "start_offset": 41,
      "end_offset": 45,
      "type": "<ALPHANUM>",
      "position": 6
    },
    {
      "token": "555-1234",
      "start_offset": 46,
      "end_offset": 54,
      "type": "<NUM>",
      "position": 7
    }
  ]
}
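For reference, token output like the above can typically be produced with OpenSearch's `_analyze` API using the `classic` tokenizer. This is a sketch; the redacted email placeholder from this thread is kept as-is rather than guessing the original address:

```json
POST /_analyze
{
  "tokenizer": "classic",
  "text": "Send an email to [email protected] or call 555-1234!"
}
```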

Signed-off-by: Fanit Kolchina <[email protected]>
Signed-off-by: Fanit Kolchina <[email protected]>
Signed-off-by: Fanit Kolchina <[email protected]>
@kolchfa-aws kolchfa-aws assigned kolchfa-aws and unassigned vagimeli Jan 2, 2025
Collaborator

@natebower natebower left a comment


@kolchfa-aws Please see my comments and changes and tag me for approval when complete. Thanks!

The `classic` tokenizer parses text as follows:

- **Punctuation**: Splits text at most punctuation marks and removes punctuation characters. Dots that aren't followed by spaces are treated as part of the token.
- **Hyphens**: Splits words at hyphens, unless the token contains a number, in which case the whole token is kept intact and treated like a product number.
Collaborator

Like a "product number" specifically?

Collaborator

Yes
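The hyphen rule discussed above could be sketched as follows. This is a rough, illustrative approximation written for this review thread, not the actual Lucene `ClassicTokenizer` implementation:

```python
import re

def split_on_hyphens(token):
    """Approximate the classic tokenizer's hyphen rule: split a
    hyphenated token into parts, unless any part contains a digit,
    in which case the whole token is kept (the product-number case)."""
    parts = token.split("-")
    if any(re.search(r"\d", part) for part in parts):
        return [token]
    return parts

print(split_on_hyphens("quick-brown"))  # no digits: split into parts
print(split_on_hyphens("XL-5000"))      # digit present: kept whole
```

Under this sketch, `"555-1234"` from the example text would also be kept as a single token, matching the `<NUM>` token in the output above.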


@udabhas udabhas left a comment


Looks good to me!

Co-authored-by: Nathan Bower <[email protected]>
Signed-off-by: kolchfa-aws <[email protected]>
@kolchfa-aws
Collaborator

@natebower Comments addressed - please review again. Thanks!

Collaborator

@natebower natebower left a comment


@kolchfa-aws LGTM with one minor deletion. Thanks!

@kolchfa-aws kolchfa-aws merged commit cdf2e30 into opensearch-project:main Jan 3, 2025
5 checks passed
@kolchfa-aws kolchfa-aws added the backport 2.18 PR: Backport label for 2.18 label Jan 3, 2025
opensearch-trigger-bot bot pushed a commit that referenced this pull request Jan 3, 2025
* adding in classic tokenizer page

Signed-off-by: [email protected] <[email protected]>

* removing unneeded whitespace

Signed-off-by: [email protected] <[email protected]>

* tokenizers does now have children

Signed-off-by: [email protected] <[email protected]>

* doc: small update for page numbers

Signed-off-by: [email protected] <[email protected]>

* format: updates to layout and formatting of page

Signed-off-by: [email protected] <[email protected]>

* Doc review

Signed-off-by: Fanit Kolchina <[email protected]>

* Change example

Signed-off-by: Fanit Kolchina <[email protected]>

* Small rewrite

Signed-off-by: Fanit Kolchina <[email protected]>

* Apply suggestions from code review

Co-authored-by: Nathan Bower <[email protected]>
Signed-off-by: kolchfa-aws <[email protected]>

* Update _analyzers/tokenizers/classic.md

Co-authored-by: Nathan Bower <[email protected]>
Signed-off-by: kolchfa-aws <[email protected]>

---------

Signed-off-by: [email protected] <[email protected]>
Signed-off-by: Fanit Kolchina <[email protected]>
Signed-off-by: kolchfa-aws <[email protected]>
Co-authored-by: Fanit Kolchina <[email protected]>
Co-authored-by: kolchfa-aws <[email protected]>
Co-authored-by: Nathan Bower <[email protected]>
(cherry picked from commit cdf2e30)
Signed-off-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
github-actions bot pushed a commit that referenced this pull request Jan 3, 2025