
Momo Job default Spark 3.4 #3679

Open · wants to merge 3 commits into main from vaibhj/deprecatespark33

Latest commit 2589e9d: Merge branch 'main' into vaibhj/deprecatespark33
GitHub Actions / Test Results for model-monitoring-ci failed Dec 19, 2024 in 0s

20 fail, 15 skipped, 343 pass in 52m 55s

378 tests  ±0   343 ✅ ±0   52m 55s ⏱️ + 4m 58s
  6 suites ±0    15 💤 ±0 
  6 files   ±0    20 ❌ ±0 

Results for commit 2589e9d. ± Comparison against earlier commit 76c91d2.
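
All 20 failures below are e2e pipeline jobs that finished 'Failed' instead of 'Completed' in the same workspace (azureml-assets-ws-20241219). A minimal diagnostic sketch for pulling the logs of whichever child component actually errored, assuming azure-ai-ml and azure-identity are installed and the workspace is still reachable; the subscription ID and pipeline job name are placeholders to fill in from the run history:

    # Diagnostic sketch, not part of the repo.
    from azure.ai.ml import MLClient
    from azure.identity import AzureCliCredential

    ml_client = MLClient(
        credential=AzureCliCredential(),
        subscription_id="<subscription-id>",            # placeholder
        resource_group_name="azureml-assets-20241219",  # from the failures below
        workspace_name="azureml-assets-ws-20241219",
    )

    pipeline_job_name = "<pipeline-job-name>"  # placeholder
    # Walk the pipeline's child runs and replay logs for the failed step(s).
    for child in ml_client.jobs.list(parent_job_name=pipeline_job_name):
        if child.status == "Failed":
            print(child.display_name, child.status)
            ml_client.jobs.stream(child.name)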

Annotations

test_monitoring_run_successful_with_datatype_override (tests.e2e.test_data_drift_signal_monitor_e2e.TestDataDriftModelMonitor) failed

results/group_1_junit3.xml [took 15m 48s]
Raw output
AssertionError: assert 'Failed' == 'Completed'
  - Completed
  + Failed
self = <test_data_drift_signal_monitor_e2e.TestDataDriftModelMonitor object at 0x7fad12549af0>
ml_client = MLClient(credential=<azure.identity._credentials.azure_cli.AzureCliCredential object at 0x7face50b4550>,
         subs...b2a3ee45d71f,
         resource_group_name=azureml-assets-20241219,
         workspace_name=azureml-assets-ws-20241219)
get_component = <function get_component.<locals>._get at 0x7face539f310>
download_job_output = <function download_job_output.<locals>.download_output at 0x7face539f820>
submit_pipeline_job = <function submit_pipeline_job.<locals>._submit_job at 0x7face539f550>
test_suite_name = '3679_merge'

    def test_monitoring_run_successful_with_datatype_override(
        self, ml_client: MLClient, get_component, download_job_output,
        submit_pipeline_job, test_suite_name
    ):
        """Test the happy path scenario with datatype override."""
        pipeline_job = _submit_data_drift_model_monitor_job(
            submit_pipeline_job,
            ml_client,
            get_component,
            test_suite_name,
            DATA_ASSET_IRIS_BASELINE_DATA_TYPE_OVERRIDE,
            DATA_ASSET_IRIS_PREPROCESSED_MODEL_INPUTS_TYPE_OVERRIDE,
            "target",
            "sepal_width",
            "petal_length"
        )
    
>       assert pipeline_job.status == "Completed"
E       AssertionError: assert 'Failed' == 'Completed'
E         - Completed
E         + Failed

assets/model_monitoring/components/tests/e2e/test_data_drift_signal_monitor_e2e.py:202: AssertionError
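
Every annotation in this report follows the same shape as the one above: a helper submits a pipeline job and the test asserts only the terminal status, so the log carries no more detail than 'Failed' == 'Completed'. A hypothetical assertion helper (not in the repo) that would surface the job name and studio URL on failure, assuming the azure.ai.ml PipelineJob returned by the submit fixtures:

    def assert_pipeline_completed(pipeline_job):
        # pipeline_job is an azure.ai.ml PipelineJob; name, status and
        # studio_url are standard Job attributes.
        assert pipeline_job.status == "Completed", (
            f"pipeline job {pipeline_job.name!r} ended in "
            f"{pipeline_job.status!r}; see {pipeline_job.studio_url}"
        )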

test_monitoring_run_use_defaults_data_has_no_drift_successful (tests.e2e.test_data_drift_signal_monitor_e2e.TestDataDriftModelMonitor) failed

results/group_1_junit3.xml [took 17m 37s]
Raw output
AssertionError: assert 'Failed' == 'Completed'
  - Completed
  + Failed
self = <test_data_drift_signal_monitor_e2e.TestDataDriftModelMonitor object at 0x7f739c7fedc0>
ml_client = MLClient(credential=<azure.identity._credentials.azure_cli.AzureCliCredential object at 0x7f7366cf4040>,
         subs...b2a3ee45d71f,
         resource_group_name=azureml-assets-20241219,
         workspace_name=azureml-assets-ws-20241219)
get_component = <function get_component.<locals>._get at 0x7f7366d8f160>
download_job_output = <function download_job_output.<locals>.download_output at 0x7f7366d8f670>
submit_pipeline_job = <function submit_pipeline_job.<locals>._submit_job at 0x7f7366d8f3a0>
test_suite_name = '3679_merge'

    def test_monitoring_run_use_defaults_data_has_no_drift_successful(
        self, ml_client: MLClient, get_component, download_job_output,
        submit_pipeline_job, test_suite_name
    ):
        """Test the happy path scenario where the data has no drift."""
        pipeline_job = _submit_data_drift_model_monitor_job(
            submit_pipeline_job,
            ml_client,
            get_component,
            test_suite_name,
            DATA_ASSET_IRIS_BASELINE_DATA,
            DATA_ASSET_IRIS_PREPROCESSED_MODEL_INPUTS_NO_DRIFT,
        )
    
>       assert pipeline_job.status == "Completed"
E       AssertionError: assert 'Failed' == 'Completed'
E         - Completed
E         + Failed

assets/model_monitoring/components/tests/e2e/test_data_drift_signal_monitor_e2e.py:97: AssertionError

test_data_joiner_successful (tests.e2e.test_data_joiner_e2e.TestDataJoinerE2E) failed

results/group_1_junit3.xml [took 13m 51s]
Raw output
AssertionError: assert 'Failed' == 'Completed'
  - Completed
  + Failed
self = <test_data_joiner_e2e.TestDataJoinerE2E object at 0x7fc73058fa90>
ml_client = MLClient(credential=<azure.identity._credentials.azure_cli.AzureCliCredential object at 0x7fc6fad63280>,
         subs...b2a3ee45d71f,
         resource_group_name=azureml-assets-20241219,
         workspace_name=azureml-assets-ws-20241219)
get_component = <function get_component.<locals>._get at 0x7fc6facb9040>
submit_pipeline_job = <function submit_pipeline_job.<locals>._submit_job at 0x7fc6facb9280>
test_suite_name = '3679_merge'

    def test_data_joiner_successful(
        self, ml_client: MLClient, get_component, submit_pipeline_job, test_suite_name
    ):
        """Test the happy path scenario for data joiner."""
        pipeline_job = _submit_data_joiner_job(
            submit_pipeline_job,
            ml_client,
            get_component,
            test_suite_name,
            DATA_ASSET_IRIS_PREPROCESSED_MODEL_INPUTS_WITH_JOIN_COLUMN,
            DATA_ASSET_MODEL_INPUTS_JOIN_COLUMN_NAME,
            DATA_ASSET_IRIS_PREPROCESSED_MODEL_OUTPUTS_WITH_JOIN_COLUMN,
            DATA_ASSET_MODEL_OUTPUTS_JOIN_COLUMN_NAME
        )
    
>       assert pipeline_job.status == "Completed"
E       AssertionError: assert 'Failed' == 'Completed'
E         - Completed
E         + Failed

assets/model_monitoring/components/tests/e2e/test_data_joiner_e2e.py:91: AssertionError

test_monitoring_run_use_defaults_data_has_no_drift_successful (tests.e2e.test_data_quality_signal_monitor_e2e.TestDataQualityModelMonitor) failed

results/group_1_junit3.xml [took 17m 35s]
Raw output
AssertionError: assert 'Failed' == 'Completed'
  - Completed
  + Failed
self = <test_data_quality_signal_monitor_e2e.TestDataQualityModelMonitor object at 0x7fc5ac98ff40>
ml_client = MLClient(credential=<azure.identity._credentials.azure_cli.AzureCliCredential object at 0x7fc576d2e340>,
         subs...b2a3ee45d71f,
         resource_group_name=azureml-assets-20241219,
         workspace_name=azureml-assets-ws-20241219)
get_component = <function get_component.<locals>._get at 0x7fc576b5e1f0>
download_job_output = <function download_job_output.<locals>.download_output at 0x7fc576b5e700>
submit_pipeline_job = <function submit_pipeline_job.<locals>._submit_job at 0x7fc576b5e430>
test_suite_name = '3679_merge'

    def test_monitoring_run_use_defaults_data_has_no_drift_successful(
        self, ml_client: MLClient, get_component, download_job_output, submit_pipeline_job, test_suite_name
    ):
        """Test the happy path scenario where the data has drift and default settings are used."""
        pipeline_job = _submit_data_quality_signal_monitor_job(
            submit_pipeline_job,
            ml_client,
            get_component,
            test_suite_name,
            DATA_ASSET_IRIS_BASELINE_DATA,
            DATA_ASSET_IRIS_PREPROCESSED_MODEL_INPUTS_NO_DRIFT,
            "target",
            "TopNByAttribution",
            "3"
        )
    
>       assert pipeline_job.status == "Completed"
E       AssertionError: assert 'Failed' == 'Completed'
E         - Completed
E         + Failed

assets/model_monitoring/components/tests/e2e/test_data_quality_signal_monitor_e2e.py:94: AssertionError

test_monitoring_run_successful_with_timestamp_data (tests.e2e.test_data_quality_signal_monitor_e2e.TestDataQualityModelMonitor) failed

results/group_1_junit3.xml [took 18m 35s]
Raw output
AssertionError: assert 'Failed' == 'Completed'
  - Completed
  + Failed
self = <test_data_quality_signal_monitor_e2e.TestDataQualityModelMonitor object at 0x7fa4c9352610>
ml_client = MLClient(credential=<azure.identity._credentials.azure_cli.AzureCliCredential object at 0x7fa4a0cd7820>,
         subs...b2a3ee45d71f,
         resource_group_name=azureml-assets-20241219,
         workspace_name=azureml-assets-ws-20241219)
get_component = <function get_component.<locals>._get at 0x7fa4a0cde310>
download_job_output = <function download_job_output.<locals>.download_output at 0x7fa4a0cde820>
submit_pipeline_job = <function submit_pipeline_job.<locals>._submit_job at 0x7fa4a0cde550>
test_suite_name = '3679_merge'

    def test_monitoring_run_successful_with_timestamp_data(
        self, ml_client: MLClient, get_component, download_job_output, submit_pipeline_job, test_suite_name
    ):
        """Test the happy path scenario with timestamp data."""
        pipeline_job = _submit_data_quality_signal_monitor_job(
            submit_pipeline_job,
            ml_client,
            get_component,
            test_suite_name,
            DATA_ASSET_WITH_TIMESTAMP_BASELINE_DATA,
            DATA_ASSET_WITH_TIMESTAMP_PRODUCTION_DATA,
            "DEFAULT_NEXT_MONTH",
            "TopNByAttribution",
            "3"
        )
    
>       assert pipeline_job.status == "Completed"
E       AssertionError: assert 'Failed' == 'Completed'
E         - Completed
E         + Failed

assets/model_monitoring/components/tests/e2e/test_data_quality_signal_monitor_e2e.py:132: AssertionError

test_monitoring_run_use_defaults_data_has_no_drift_successful (tests.e2e.test_create_manifest_e2e.TestCreateManifestE2E) failed

results/group_1_junit3.xml [took 4m 35s]
Raw output
AssertionError: assert 'Failed' == 'Completed'
  - Completed
  + Failed
self = <test_create_manifest_e2e.TestCreateManifestE2E object at 0x7fdf75dfe7f0>
ml_client = MLClient(credential=<azure.identity._credentials.azure_cli.AzureCliCredential object at 0x7fdf4935af40>,
         subs...b2a3ee45d71f,
         resource_group_name=azureml-assets-20241219,
         workspace_name=azureml-assets-ws-20241219)
get_component = <function get_component.<locals>._get at 0x7fdf49372f70>
download_job_output = <function download_job_output.<locals>.download_output at 0x7fdf377a9670>
submit_pipeline_job = <function submit_pipeline_job.<locals>._submit_job at 0x7fdf377a9af0>
test_suite_name = '3679_merge'

    def test_monitoring_run_use_defaults_data_has_no_drift_successful(
        self, ml_client: MLClient, get_component, download_job_output, submit_pipeline_job, test_suite_name
    ):
        """Test the happy path scenario where the data has drift and default settings are used."""
        pipeline_job = _submit_data_drift_and_create_manifest_job(
            submit_pipeline_job,
            ml_client,
            get_component,
            test_suite_name,
            DATA_ASSET_IRIS_BASELINE_DATA,
            DATA_ASSET_IRIS_MODEL_INPUTS_WITH_DRIFT,
        )
    
>       assert pipeline_job.status == "Completed"
E       AssertionError: assert 'Failed' == 'Completed'
E         - Completed
E         + Failed

assets/model_monitoring/components/tests/e2e/test_create_manifest_e2e.py:104: AssertionError

test_monitoring_run_successful_with_timestamp_data (tests.e2e.test_data_drift_signal_monitor_e2e.TestDataDriftModelMonitor) failed

results/group_1_junit3.xml [took 4m 35s]
Raw output
AssertionError: assert 'Failed' == 'Completed'
  - Completed
  + Failed
self = <test_data_drift_signal_monitor_e2e.TestDataDriftModelMonitor object at 0x7fad12543a00>
ml_client = MLClient(credential=<azure.identity._credentials.azure_cli.AzureCliCredential object at 0x7face50b4550>,
         subs...b2a3ee45d71f,
         resource_group_name=azureml-assets-20241219,
         workspace_name=azureml-assets-ws-20241219)
get_component = <function get_component.<locals>._get at 0x7face539f310>
download_job_output = <function download_job_output.<locals>.download_output at 0x7face472d550>
submit_pipeline_job = <function submit_pipeline_job.<locals>._submit_job at 0x7face539f550>
test_suite_name = '3679_merge'

    def test_monitoring_run_successful_with_timestamp_data(
        self, ml_client: MLClient, get_component, download_job_output,
        submit_pipeline_job, test_suite_name
    ):
        """Test the happy path scenario with timestamp data."""
        pipeline_job = _submit_data_drift_model_monitor_job(
            submit_pipeline_job,
            ml_client,
            get_component,
            test_suite_name,
            DATA_ASSET_WITH_TIMESTAMP_BASELINE_DATA,
            DATA_ASSET_WITH_TIMESTAMP_PRODUCTION_DATA,
            "DEFAULT_NEXT_MONTH"
        )
    
>       assert pipeline_job.status == "Completed"
E       AssertionError: assert 'Failed' == 'Completed'
E         - Completed
E         + Failed

assets/model_monitoring/components/tests/e2e/test_data_drift_signal_monitor_e2e.py:219: AssertionError

test_featureattributiondrift_with_preprocessor_and_datajoiner_successful (tests.e2e.test_feature_attribution_drift_signal_monitor_e2e.TestFeatureAttributionDriftModelMonitor) failed

results/group_1_junit3.xml [took 2m 54s]
Raw output
AssertionError: assert 'Failed' == 'Completed'
  - Completed
  + Failed
self = <test_feature_attribution_drift_signal_monitor_e2e.TestFeatureAttributionDriftModelMonitor object at 0x7fc73059c0a0>
ml_client = MLClient(credential=<azure.identity._credentials.azure_cli.AzureCliCredential object at 0x7fc6fad63280>,
         subs...b2a3ee45d71f,
         resource_group_name=azureml-assets-20241219,
         workspace_name=azureml-assets-ws-20241219)
get_component = <function get_component.<locals>._get at 0x7fc6facb9040>
submit_pipeline_job = <function submit_pipeline_job.<locals>._submit_job at 0x7fc6fa919a60>
test_suite_name = '3679_merge'

    def test_featureattributiondrift_with_preprocessor_and_datajoiner_successful(
        self, ml_client: MLClient, get_component, submit_pipeline_job, test_suite_name
    ):
        """Test preprocessor and data joiner with FAD signal."""
        pipeline_job = _submit_feature_attribution_drift_with_preprocessor_and_datajoiner(
            submit_pipeline_job,
            ml_client,
            get_component,
            test_suite_name,
            DATA_ASSET_IRIS_MODEL_INPUTS_NO_DRIFT,
            DATA_ASSET_IRIS_MODEL_OUTPUTS_NO_DRIFT,
            DATA_ASSET_IRIS_BASELINE_DATA
        )
    
>       assert pipeline_job.status == "Completed"
E       AssertionError: assert 'Failed' == 'Completed'
E         - Completed
E         + Failed

assets/model_monitoring/components/tests/e2e/test_feature_attribution_drift_signal_monitor_e2e.py:169: AssertionError

test_monitoring_run_int_single_distinct_value_histogram (tests.e2e.test_data_drift_signal_monitor_e2e.TestDataDriftModelMonitor) failed

results/group_1_junit3.xml [took 4m 36s]
Raw output
AssertionError: assert 'Failed' == 'Completed'
  - Completed
  + Failed
self = <test_data_drift_signal_monitor_e2e.TestDataDriftModelMonitor object at 0x7ff9ecf898e0>
ml_client = MLClient(credential=<azure.identity._credentials.azure_cli.AzureCliCredential object at 0x7ff9bc986910>,
         subs...b2a3ee45d71f,
         resource_group_name=azureml-assets-20241219,
         workspace_name=azureml-assets-ws-20241219)
get_component = <function get_component.<locals>._get at 0x7ff9bc8d51f0>
download_job_output = <function download_job_output.<locals>.download_output at 0x7ff9aff301f0>
submit_pipeline_job = <function submit_pipeline_job.<locals>._submit_job at 0x7ff9bc8d5430>
test_suite_name = '3679_merge'

    def test_monitoring_run_int_single_distinct_value_histogram(
        self, ml_client: MLClient, get_component, download_job_output,
        submit_pipeline_job, test_suite_name
    ):
        """Test the scenario where the production data has a column with only one distinct value."""
        pipeline_job = _submit_data_drift_model_monitor_job(
            submit_pipeline_job,
            ml_client,
            get_component,
            test_suite_name,
            DATA_ASSET_IRIS_BASELINE_INT_SINGLE_VALUE_HISTOGRAM,
            DATA_ASSET_IRIS_PREPROCESSED_MODEL_INPUTS_INT_SINGLE_VALUE_HISTOGRAM,
        )
    
>       assert pipeline_job.status == "Completed"
E       AssertionError: assert 'Failed' == 'Completed'
E         - Completed
E         + Failed

assets/model_monitoring/components/tests/e2e/test_data_drift_signal_monitor_e2e.py:183: AssertionError

test_monitoring_run_add_more_valid_datatype_data_successful (tests.e2e.test_data_quality_signal_monitor_e2e.TestDataQualityModelMonitor) failed

results/group_1_junit3.xml [took 5m 32s]
Raw output
AssertionError: assert 'Failed' == 'Completed'
  - Completed
  + Failed
self = <test_data_quality_signal_monitor_e2e.TestDataQualityModelMonitor object at 0x7fdf75d98a90>
ml_client = MLClient(credential=<azure.identity._credentials.azure_cli.AzureCliCredential object at 0x7fdf4935af40>,
         subs...b2a3ee45d71f,
         resource_group_name=azureml-assets-20241219,
         workspace_name=azureml-assets-ws-20241219)
get_component = <function get_component.<locals>._get at 0x7fdf49372f70>
submit_pipeline_job = <function submit_pipeline_job.<locals>._submit_job at 0x7fdf37753160>
test_suite_name = '3679_merge'

    def test_monitoring_run_add_more_valid_datatype_data_successful(
        self, ml_client: MLClient, get_component, submit_pipeline_job, test_suite_name
    ):
        """Test the scenario where the datatype contains timestamp and boolean."""
        # The test case does not choose the target_column because of a bug in feature_importance
        # component did not support timestamp type. So we do not select target_column for now for the test
        pipeline_job = _submit_data_quality_signal_monitor_job(
            submit_pipeline_job,
            ml_client,
            get_component,
            test_suite_name,
            DATA_ASSET_VALID_DATATYPE,
            DATA_ASSET_VALID_DATATYPE,
            None,
            "All",
            None
        )
    
>       assert pipeline_job.status == "Completed"
E       AssertionError: assert 'Failed' == 'Completed'
E         - Completed
E         + Failed

assets/model_monitoring/components/tests/e2e/test_data_quality_signal_monitor_e2e.py:172: AssertionError

test_monitoring_run_successful_with_datatype_override (tests.e2e.test_data_quality_signal_monitor_e2e.TestDataQualityModelMonitor) failed

results/group_1_junit3.xml [took 6m 33s]
Raw output
AssertionError: assert 'Failed' == 'Completed'
  - Completed
  + Failed
self = <test_data_quality_signal_monitor_e2e.TestDataQualityModelMonitor object at 0x7fc5ac98f970>
ml_client = MLClient(credential=<azure.identity._credentials.azure_cli.AzureCliCredential object at 0x7fc576d2e340>,
         subs...b2a3ee45d71f,
         resource_group_name=azureml-assets-20241219,
         workspace_name=azureml-assets-ws-20241219)
get_component = <function get_component.<locals>._get at 0x7fc576b5e1f0>
download_job_output = <function download_job_output.<locals>.download_output at 0x7fc576982790>
submit_pipeline_job = <function submit_pipeline_job.<locals>._submit_job at 0x7fc576b5e430>
test_suite_name = '3679_merge'

    def test_monitoring_run_successful_with_datatype_override(
        self, ml_client: MLClient, get_component, download_job_output, submit_pipeline_job, test_suite_name
    ):
        """Test the happy path scenario with datatype override."""
        pipeline_job = _submit_data_quality_signal_monitor_job(
            submit_pipeline_job,
            ml_client,
            get_component,
            test_suite_name,
            DATA_ASSET_IRIS_BASELINE_DATA_TYPE_OVERRIDE,
            DATA_ASSET_IRIS_PREPROCESSED_MODEL_INPUTS_TYPE_OVERRIDE,
            "target",
            "TopNByAttribution",
            "3",
            "sepal_width",
            "petal_length"
        )
    
>       assert pipeline_job.status == "Completed"
E       AssertionError: assert 'Failed' == 'Completed'
E         - Completed
E         + Failed

assets/model_monitoring/components/tests/e2e/test_data_quality_signal_monitor_e2e.py:114: AssertionError

test_monitoring_run_use_int_data_has_no_drift_successful (tests.e2e.test_data_drift_signal_monitor_e2e.TestDataDriftModelMonitor) failed

results/group_1_junit3.xml [took 4m 34s]
Raw output
AssertionError: assert 'Failed' == 'Completed'
  - Completed
  + Failed
self = <test_data_drift_signal_monitor_e2e.TestDataDriftModelMonitor object at 0x7fd180f4de20>
ml_client = MLClient(credential=<azure.identity._credentials.azure_cli.AzureCliCredential object at 0x7fd143dbbaf0>,
         subs...b2a3ee45d71f,
         resource_group_name=azureml-assets-20241219,
         workspace_name=azureml-assets-ws-20241219)
get_component = <function get_component.<locals>._get at 0x7fd143ab2ee0>
download_job_output = <function download_job_output.<locals>.download_output at 0x7fd14398d160>
submit_pipeline_job = <function submit_pipeline_job.<locals>._submit_job at 0x7fd143b6e160>
test_suite_name = '3679_merge'

    def test_monitoring_run_use_int_data_has_no_drift_successful(
        self, ml_client: MLClient, get_component, download_job_output,
        submit_pipeline_job, test_suite_name
    ):
        """Test the happy path scenario with int data."""
        pipeline_job = _submit_data_drift_model_monitor_job(
            submit_pipeline_job,
            ml_client,
            get_component,
            test_suite_name,
            DATA_ASSET_IRIS_BASELINE_INT_DATA_TYPE,
            DATA_ASSET_IRIS_PREPROCESSED_MODEL_INPUTS_NO_DRIFT_INT_DATA
        )
    
>       assert pipeline_job.status == "Completed"
E       AssertionError: assert 'Failed' == 'Completed'
E         - Completed
E         + Failed

assets/model_monitoring/components/tests/e2e/test_data_drift_signal_monitor_e2e.py:149: AssertionError

test_mdc_preprocessor_successful[azureml:uri_folder_genai_raw_log_model_inputs:1-2024-02-05T00:00:00Z-2024-02-06T00:00:00Z] (tests.e2e.test_genai_preprocessor_e2e.TestGenAIPreprocessorE2E) failed

results/group_1_junit3.xml [took 2m 53s]
Raw output
AssertionError: assert 'Failed' == 'Completed'
  - Completed
  + Failed
self = <test_genai_preprocessor_e2e.TestGenAIPreprocessorE2E object at 0x7fc73059ce20>
ml_client = MLClient(credential=<azure.identity._credentials.azure_cli.AzureCliCredential object at 0x7fc6fad63280>,
         subs...b2a3ee45d71f,
         resource_group_name=azureml-assets-20241219,
         workspace_name=azureml-assets-ws-20241219)
get_component = <function get_component.<locals>._get at 0x7fc6facb9040>
submit_pipeline_job = <function submit_pipeline_job.<locals>._submit_job at 0x7fc6fa8f58b0>
test_suite_name = '3679_merge'
input_data = 'azureml:uri_folder_genai_raw_log_model_inputs:1'
start_time = '2024-02-05T00:00:00Z', end_time = '2024-02-06T00:00:00Z'

    @pytest.mark.parametrize(
        "input_data, start_time, end_time",
        [
            # traditional model
            (DATA_ASSET_GENAI_RAW_LOG_MODEL_INPUTS, "2024-02-05T00:00:00Z", "2024-02-06T00:00:00Z"),
            # log with events
            (DATA_ASSET_GENAI_RAW_LOG_WITH_EVENTS, "2024-04-08T00:00:00Z", "2024-04-10T20:00:00Z")
        ]
    )
    def test_mdc_preprocessor_successful(
        self, ml_client: MLClient, get_component, submit_pipeline_job, test_suite_name, input_data,
        start_time, end_time
    ):
        """Test the happy path scenario for Gen AI preprocessor."""
        pipeline_job = _submit_genai_preprocessor_job(
            submit_pipeline_job,
            ml_client,
            get_component,
            test_suite_name,
            input_data,
            start_time,
            end_time
        )
    
>       assert pipeline_job.status == "Completed"
E       AssertionError: assert 'Failed' == 'Completed'
E         - Completed
E         + Failed

assets/model_monitoring/components/tests/e2e/test_genai_preprocessor_e2e.py:93: AssertionError

test_mdc_preprocessor_successful[azureml:uri_folder_genai_raw_log_with_events:1-2024-04-08T00:00:00Z-2024-04-10T20:00:00Z] (tests.e2e.test_genai_preprocessor_e2e.TestGenAIPreprocessorE2E) failed

results/group_1_junit3.xml [took 1m 54s]
Raw output
AssertionError: assert 'Failed' == 'Completed'
  - Completed
  + Failed
self = <test_genai_preprocessor_e2e.TestGenAIPreprocessorE2E object at 0x7ff9ecf9cc70>
ml_client = MLClient(credential=<azure.identity._credentials.azure_cli.AzureCliCredential object at 0x7ff9bc986910>,
         subs...b2a3ee45d71f,
         resource_group_name=azureml-assets-20241219,
         workspace_name=azureml-assets-ws-20241219)
get_component = <function get_component.<locals>._get at 0x7ff9bc8d51f0>
submit_pipeline_job = <function submit_pipeline_job.<locals>._submit_job at 0x7ff9aff305e0>
test_suite_name = '3679_merge'
input_data = 'azureml:uri_folder_genai_raw_log_with_events:1'
start_time = '2024-04-08T00:00:00Z', end_time = '2024-04-10T20:00:00Z'

    @pytest.mark.parametrize(
        "input_data, start_time, end_time",
        [
            # traditional model
            (DATA_ASSET_GENAI_RAW_LOG_MODEL_INPUTS, "2024-02-05T00:00:00Z", "2024-02-06T00:00:00Z"),
            # log with events
            (DATA_ASSET_GENAI_RAW_LOG_WITH_EVENTS, "2024-04-08T00:00:00Z", "2024-04-10T20:00:00Z")
        ]
    )
    def test_mdc_preprocessor_successful(
        self, ml_client: MLClient, get_component, submit_pipeline_job, test_suite_name, input_data,
        start_time, end_time
    ):
        """Test the happy path scenario for Gen AI preprocessor."""
        pipeline_job = _submit_genai_preprocessor_job(
            submit_pipeline_job,
            ml_client,
            get_component,
            test_suite_name,
            input_data,
            start_time,
            end_time
        )
    
>       assert pipeline_job.status == "Completed"
E       AssertionError: assert 'Failed' == 'Completed'
E         - Completed
E         + Failed

assets/model_monitoring/components/tests/e2e/test_genai_preprocessor_e2e.py:93: AssertionError

test_mdc_preprocessor_successful[azureml:uri_folder_llm_model_inputs:1-2023-10-24T22:00:00Z-2023-10-24T23:00:00Z] (tests.e2e.test_model_data_collector_preprocessor_e2e.TestMDCPreprocessorE2E) failed

results/group_1_junit3.xml [took 5m 16s]
Raw output
AssertionError: assert 'Failed' == 'Completed'
  - Completed
  + Failed
self = <test_model_data_collector_preprocessor_e2e.TestMDCPreprocessorE2E object at 0x7f739c7ad220>
ml_client = MLClient(credential=<azure.identity._credentials.azure_cli.AzureCliCredential object at 0x7f7366cf4040>,
         subs...b2a3ee45d71f,
         resource_group_name=azureml-assets-20241219,
         workspace_name=azureml-assets-ws-20241219)
get_component = <function get_component.<locals>._get at 0x7f7366d8f160>
submit_pipeline_job = <function submit_pipeline_job.<locals>._submit_job at 0x7f73669ff160>
test_suite_name = '3679_merge'
input_data = 'azureml:uri_folder_llm_model_inputs:1'
start_time = '2023-10-24T22:00:00Z', end_time = '2023-10-24T23:00:00Z'

    @pytest.mark.parametrize(
        "input_data, start_time, end_time",
        [
            # traditional model
            (DATA_ASSET_IRIS_MODEL_INPUTS_WITH_DRIFT, "2023-01-29T00:00:00Z", "2023-02-03T00:00:00Z"),
            # LLM model
            (DATA_ASSET_LLM_INPUTS, "2023-10-24T22:00:00Z", "2023-10-24T23:00:00Z")
        ]
    )
    def test_mdc_preprocessor_successful(
        self, ml_client: MLClient, get_component, submit_pipeline_job, test_suite_name, input_data,
        start_time, end_time
    ):
        """Test the happy path scenario for MDC preprocessor."""
        for extract_correlation_id in [True, False]:
            pipeline_job = _submit_mdc_preprocessor_job(
                submit_pipeline_job,
                ml_client,
                get_component,
                test_suite_name,
                extract_correlation_id,
                input_data,
                start_time,
                end_time
            )
    
>       assert pipeline_job.status == "Completed"
E       AssertionError: assert 'Failed' == 'Completed'
E         - Completed
E         + Failed

assets/model_monitoring/components/tests/e2e/test_model_data_collector_preprocessor_e2e.py:92: AssertionError
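
Note that in this test (and its twin below) the assert sits outside the for extract_correlation_id in [True, False] loop, so only the second submission (extract_correlation_id=False) is actually checked. A sketch of the per-iteration check, keeping the same fixtures:

    for extract_correlation_id in [True, False]:
        pipeline_job = _submit_mdc_preprocessor_job(
            submit_pipeline_job,
            ml_client,
            get_component,
            test_suite_name,
            extract_correlation_id,
            input_data,
            start_time,
            end_time,
        )
        # Assert inside the loop so a failure in the first run is not masked.
        assert pipeline_job.status == "Completed", (
            f"run with extract_correlation_id={extract_correlation_id} "
            f"ended in {pipeline_job.status!r}"
        )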

test_monitoring_classification_successful (tests.e2e.test_model_performance_e2e.TestModelPerformanceModelMonitor) failed

results/group_1_junit3.xml [took 17m 42s]
Raw output
AssertionError: assert 'Failed' == 'Completed'
  - Completed
  + Failed
self = <test_model_performance_e2e.TestModelPerformanceModelMonitor object at 0x7fc7305ade20>
ml_client = MLClient(credential=<azure.identity._credentials.azure_cli.AzureCliCredential object at 0x7fc6fad63280>,
         subs...b2a3ee45d71f,
         resource_group_name=azureml-assets-20241219,
         workspace_name=azureml-assets-ws-20241219)
get_component = <function get_component.<locals>._get at 0x7fc6facb9040>
submit_pipeline_job = <function submit_pipeline_job.<locals>._submit_job at 0x7fc6fa861310>
test_suite_name = '3679_merge'

    def test_monitoring_classification_successful(
        self, ml_client: MLClient, get_component, submit_pipeline_job, test_suite_name
    ):
        """Test model performance on classification model."""
        pipeline_job = _submit_model_performance_signal_monitor_job(
            submit_pipeline_job,
            ml_client,
            get_component,
            test_suite_name,
            "tabular-classification",
            "classification-targetvalue",
            DATA_ASSET_MODEL_PERFORMANCE_PRODUCTION_DATA,
            "classification",
            None,
            None,
            0.1,
            0.1,
            0.1,
        )
    
>       assert pipeline_job.status == "Completed"
E       AssertionError: assert 'Failed' == 'Completed'
E         - Completed
E         + Failed

assets/model_monitoring/components/tests/e2e/test_model_performance_e2e.py:112: AssertionError

test_mdc_preprocessor_successful (tests.e2e.test_model_monitor_metric_outputter_e2e.TestModelMonitorMetricOutputterE2E) failed

results/group_1_junit3.xml [took 2m 38s]
Raw output
AssertionError: assert 'Failed' == 'Completed'
  - Completed
  + Failed
self = <test_model_monitor_metric_outputter_e2e.TestModelMonitorMetricOutputterE2E object at 0x7fd180f95d00>
ml_client = MLClient(credential=<azure.identity._credentials.azure_cli.AzureCliCredential object at 0x7fd143dbbaf0>,
         subs...b2a3ee45d71f,
         resource_group_name=azureml-assets-20241219,
         workspace_name=azureml-assets-ws-20241219)
get_component = <function get_component.<locals>._get at 0x7fd143ab2ee0>
submit_pipeline_job = <function submit_pipeline_job.<locals>._submit_job at 0x7fd14398d4c0>
test_suite_name = '3679_merge'

    def test_mdc_preprocessor_successful(
        self, ml_client: MLClient, get_component, submit_pipeline_job, test_suite_name
    ):
        """Test the happy path scenario for MDC preprocessor."""
        pipeline_job = _submit_metric_outputter_job(
            submit_pipeline_job,
            ml_client,
            get_component,
            test_suite_name,
            DATA_ASSET_MLTABLE_DATA_DRIFT_SIGNAL_OUTPUT,
            DATA_ASSET_MLTABLE_SAMPLES_INDEX_OUTPUT,
        )
    
>       assert pipeline_job.status == "Completed"
E       AssertionError: assert 'Failed' == 'Completed'
E         - Completed
E         + Failed

assets/model_monitoring/components/tests/e2e/test_model_monitor_metric_outputter_e2e.py:78: AssertionError

test_monitoring_run_use_defaults_data_has_no_drift_successful (tests.e2e.test_prediction_drift_signal_monitor_e2e.TestPredictionDriftModelMonitor) failed

results/group_1_junit3.xml [took 1m 56s]
Raw output
AssertionError: assert 'Failed' == 'Completed'
  - Completed
  + Failed
self = <test_prediction_drift_signal_monitor_e2e.TestPredictionDriftModelMonitor object at 0x7fad12571e20>
ml_client = MLClient(credential=<azure.identity._credentials.azure_cli.AzureCliCredential object at 0x7face50b4550>,
         subs...b2a3ee45d71f,
         resource_group_name=azureml-assets-20241219,
         workspace_name=azureml-assets-ws-20241219)
get_component = <function get_component.<locals>._get at 0x7face539f310>
download_job_output = <function download_job_output.<locals>.download_output at 0x7face46881f0>
submit_pipeline_job = <function submit_pipeline_job.<locals>._submit_job at 0x7face4688040>
test_suite_name = '3679_merge'

    def test_monitoring_run_use_defaults_data_has_no_drift_successful(
        self, ml_client: MLClient, get_component, download_job_output, submit_pipeline_job, test_suite_name
    ):
        """Test the happy path scenario where the data has drift and default settings are used."""
        pipeline_job = _submit_prediction_drift_model_monitor_job(
            submit_pipeline_job,
            ml_client,
            get_component,
            test_suite_name,
            DATA_ASSET_IRIS_BASELINE_DATA,
            DATA_ASSET_IRIS_PREPROCESSED_MODEL_OUTPUTS_NO_DRIFT,
        )
    
>       assert pipeline_job.status == "Completed"
E       AssertionError: assert 'Failed' == 'Completed'
E         - Completed
E         + Failed

assets/model_monitoring/components/tests/e2e/test_prediction_drift_signal_monitor_e2e.py:77: AssertionError

test_mdc_preprocessor_successful[azureml:uri_folder_iris_model_inputs_with_drift:1-2023-01-29T00:00:00Z-2023-02-03T00:00:00Z] (tests.e2e.test_model_data_collector_preprocessor_e2e.TestMDCPreprocessorE2E) failed

results/group_1_junit3.xml [took 3m 41s]
Raw output
AssertionError: assert 'Failed' == 'Completed'
  - Completed
  + Failed
self = <test_model_data_collector_preprocessor_e2e.TestMDCPreprocessorE2E object at 0x7fc5ac9a8fd0>
ml_client = MLClient(credential=<azure.identity._credentials.azure_cli.AzureCliCredential object at 0x7fc576d2e340>,
         subs...b2a3ee45d71f,
         resource_group_name=azureml-assets-20241219,
         workspace_name=azureml-assets-ws-20241219)
get_component = <function get_component.<locals>._get at 0x7fc576b5e1f0>
submit_pipeline_job = <function submit_pipeline_job.<locals>._submit_job at 0x7fc5769824c0>
test_suite_name = '3679_merge'
input_data = 'azureml:uri_folder_iris_model_inputs_with_drift:1'
start_time = '2023-01-29T00:00:00Z', end_time = '2023-02-03T00:00:00Z'

    @pytest.mark.parametrize(
        "input_data, start_time, end_time",
        [
            # traditional model
            (DATA_ASSET_IRIS_MODEL_INPUTS_WITH_DRIFT, "2023-01-29T00:00:00Z", "2023-02-03T00:00:00Z"),
            # LLM model
            (DATA_ASSET_LLM_INPUTS, "2023-10-24T22:00:00Z", "2023-10-24T23:00:00Z")
        ]
    )
    def test_mdc_preprocessor_successful(
        self, ml_client: MLClient, get_component, submit_pipeline_job, test_suite_name, input_data,
        start_time, end_time
    ):
        """Test the happy path scenario for MDC preprocessor."""
        for extract_correlation_id in [True, False]:
            pipeline_job = _submit_mdc_preprocessor_job(
                submit_pipeline_job,
                ml_client,
                get_component,
                test_suite_name,
                extract_correlation_id,
                input_data,
                start_time,
                end_time
            )
    
>       assert pipeline_job.status == "Completed"
E       AssertionError: assert 'Failed' == 'Completed'
E         - Completed
E         + Failed

assets/model_monitoring/components/tests/e2e/test_model_data_collector_preprocessor_e2e.py:92: AssertionError

test_monitoring_regression_successful (tests.e2e.test_model_performance_e2e.TestModelPerformanceModelMonitor) failed

results/group_1_junit3.xml [took 14m 27s]
Raw output
AssertionError: assert 'Failed' == 'Completed'
  - Completed
  + Failed
self = <test_model_performance_e2e.TestModelPerformanceModelMonitor object at 0x7fa4c936ffa0>
ml_client = MLClient(credential=<azure.identity._credentials.azure_cli.AzureCliCredential object at 0x7fa4a0cd7820>,
         subs...b2a3ee45d71f,
         resource_group_name=azureml-assets-20241219,
         workspace_name=azureml-assets-ws-20241219)
get_component = <function get_component.<locals>._get at 0x7fa4a0cde310>
submit_pipeline_job = <function submit_pipeline_job.<locals>._submit_job at 0x7fa48bf3bd30>
test_suite_name = '3679_merge'

    def test_monitoring_regression_successful(
        self, ml_client: MLClient, get_component, submit_pipeline_job, test_suite_name
    ):
        """Test model performance on regression model."""
        pipeline_job = _submit_model_performance_signal_monitor_job(
            submit_pipeline_job,
            ml_client,
            get_component,
            test_suite_name,
            "tabular-regression",
            "regression-targetvalue",
            DATA_ASSET_MODEL_PERFORMANCE_PRODUCTION_DATA,
            "regression",
            0.1,
            0.1
        )
    
>       assert pipeline_job.status == "Completed"
E       AssertionError: assert 'Failed' == 'Completed'
E         - Completed
E         + Failed

assets/model_monitoring/components/tests/e2e/test_model_performance_e2e.py:90: AssertionError