Group llm integration test output using github ::group:: #546

Open · @renxida wants to merge 9 commits into main from integration-test-granular-steps
Conversation

@renxida (Contributor) commented Nov 15, 2024

Use the ::group:: GitHub Workflow Command to make it easier to navigate integration logs.

Also adds a summary to make it easy to see what the generation results are.
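
For context, `::group::` and `::endgroup::` are GitHub Actions workflow commands: every log line printed between them is folded into one expandable section of the job log. A minimal sketch of emitting the markers through the test logger (the helper names `ghstartgroup`/`ghendgroup` match the diff below, but the bodies here are illustrative, not the PR's exact code):

```python
import logging

logging.basicConfig(level=logging.INFO, format="%(message)s")
logger = logging.getLogger(__name__)


def ghstartgroup(msg):
    # GitHub Actions folds everything logged after this line into a collapsible group.
    logger.info(f"::group::{msg}")


def ghendgroup():
    # Closes the current group; later log lines appear at the top level again.
    logger.info("::endgroup::")


ghstartgroup("Sending HTTP Generation Request")
logger.info("Prompt text: 1 2 3 4 5")
ghendgroup()
```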

@renxida changed the title from "more granular shark-platform to highlight how long shortfin build takes" to "shark-ai Integration CI Ergonomics" on Nov 15, 2024
@renxida force-pushed the integration-test-granular-steps branch from 795e8f9 to aa729f0 on November 15, 2024 19:38
@renxida changed the title from "shark-ai Integration CI Ergonomics" to "Group llm integration test output using github ::group::" on Nov 15, 2024
@renxida marked this pull request as ready for review on November 15, 2024 20:27
expected_output_prefix = "6 7 8"
logger.info("::group::Sending HTTP Generation Request")
Contributor commented on the diff:

Was this also supposed to create its own group?

Looks like it needs an ::endgroup:: at the end. The logged output currently shows:

INFO     app_tests.integration_tests.llm.cpu_llm_server_test:cpu_llm_server_test.py:87 ::group::Sending HTTP Generation Request
INFO     app_tests.integration_tests.llm.cpu_llm_server_test:cpu_llm_server_test.py:35 Generating request...
INFO     app_tests.integration_tests.llm.cpu_llm_server_test:cpu_llm_server_test.py:48 Prompt text:
INFO     app_tests.integration_tests.llm.cpu_llm_server_test:cpu_llm_server_test.py:49 1 2 3 4 5 
INFO     app_tests.integration_tests.llm.cpu_llm_server_test:cpu_llm_server_test.py:52 Generate endpoint status code: 200
INFO     app_tests.integration_tests.llm.cpu_llm_server_test:cpu_llm_server_test.py:54 Generated text:
INFO     app_tests.integration_tests.llm.cpu_llm_server_test:cpu_llm_server_test.py:96 6 7 8 9 10 11 12
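A hedged sketch of what closing the group around the request could look like in the test (this is not the PR's code; the endpoint URL, payload shape, and helper name are assumptions for illustration):

```python
import logging

import requests  # assumption: the integration test issues the HTTP call with requests

logger = logging.getLogger(__name__)


def generate_with_grouped_logs(url: str, prompt: str) -> str:
    """Send a generation request and keep all of its log lines inside one
    collapsible GitHub Actions group (hypothetical helper, not in the PR)."""
    logger.info("::group::Sending HTTP Generation Request")
    try:
        logger.info("Prompt text:")
        logger.info(prompt)
        # Payload key "text" is an assumption about the generate endpoint.
        resp = requests.post(url, json={"text": prompt})
        logger.info("Generate endpoint status code: %s", resp.status_code)
        logger.info("Generated text:")
        logger.info(resp.text)
        return resp.text
    finally:
        # Always close the group so later log lines are not swallowed into it.
        logger.info("::endgroup::")
```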

@@ -24,6 +24,20 @@
logger = logging.getLogger(__name__)


def ghstartgroup(msg):
Contributor commented:

This and ghendgroup would be more shareable with the model setup and the actual test if they lived in utils.py.

We already import from utils in conftest, so this would let us easily reuse these helpers in the integration test instead of hardcoding ::group::/::endgroup:: tags.
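
A sketch of what a shared helper in utils.py could look like, written as a context manager so callers cannot forget the closing ::endgroup:: (the names and file location are illustrative, not from the PR):

```python
# app_tests/integration_tests/llm/utils.py (illustrative location)
import contextlib
import logging

logger = logging.getLogger(__name__)


@contextlib.contextmanager
def gh_log_group(title: str):
    """Group all log output emitted inside the `with` block into one
    collapsible section of the GitHub Actions log."""
    logger.info(f"::group::{title}")
    try:
        yield
    finally:
        logger.info("::endgroup::")
```

Both conftest.py and the test could then write `with gh_log_group("Sending HTTP Generation Request"): ...` instead of hard-coding the markers.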

@renxida force-pushed the integration-test-granular-steps branch from ba28a00 to dedad87 on November 17, 2024 18:29