Allow specifying an ID when submitting an image query (#271)
Adds an optional `image_query_id` parameter to methods that submit an
image query. Adds tests for the new parameter. Updates the generated
code based on the new spec.
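As a sketch of what the new parameter does on the wire (per the generated client below, `image_query_id` is sent as an ordinary query parameter next to `detector_id`; the helper function and the ID value here are illustrative, not part of the SDK):

```python
from urllib.parse import urlencode

# Illustrative sketch (not the SDK itself): the generated client sends
# image_query_id as a query parameter alongside detector_id.
def build_submit_url(base, detector_id, image_query_id=None, **extra):
    params = {"detector_id": detector_id, **extra}
    if image_query_id is not None:
        # New in this commit: caller-specified ID for the created image query.
        params["image_query_id"] = image_query_id
    return f"{base}/v1/image-queries?{urlencode(params)}"

url = build_submit_url(
    "https://api.groundlight.ai/device-api",
    detector_id="det_abc123",
    image_query_id="iq_custom_001",  # hypothetical ID value
)
print(url)
# -> https://api.groundlight.ai/device-api/v1/image-queries?detector_id=det_abc123&image_query_id=iq_custom_001
```

When the parameter is omitted, the query string carries no `image_query_id` key and the server assigns an ID as before.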

---------

Co-authored-by: Auto-format Bot <[email protected]>
CoreyEWood and Auto-format Bot authored Nov 13, 2024
1 parent 9820c18 commit be6d1d3
Showing 70 changed files with 139 additions and 110 deletions.
7 changes: 5 additions & 2 deletions Makefile
Original file line number Diff line number Diff line change
@@ -49,10 +49,13 @@ test-4edge: install ## Run tests against the prod API via the edge-endpoint (ne
${PYTEST} ${PROFILING_ARGS} ${TEST_ARGS} ${EDGE_FILTERS} test

test-local: install ## Run tests against a localhost API (needs GROUNDLIGHT_API_TOKEN and a local API server)
GROUNDLIGHT_ENDPOINT="http://localhost:8000/" ${PYTEST} ${TEST_ARGS} ${CLOUD_FILTERS} test
GROUNDLIGHT_ENDPOINT="http://localhost:8000/" $(MAKE) test

test-integ: install ## Run tests against the integ API server (needs GROUNDLIGHT_API_TOKEN)
GROUNDLIGHT_ENDPOINT="https://api.integ.groundlight.ai/" ${PYTEST} ${TEST_ARGS} ${CLOUD_FILTERS} test
GROUNDLIGHT_ENDPOINT="https://api.integ.groundlight.ai/" $(MAKE) test

test-dev: install ## Run tests against a dev API server (needs GROUNDLIGHT_API_TOKEN and properly configured dns-hostmap)
GROUNDLIGHT_ENDPOINT="https://api.dev.groundlight.ai/" $(MAKE) test

test-docs: install ## Run the example code and tests in our docs against the prod API (needs GROUNDLIGHT_API_TOKEN)
${PYTEST} --markdown-docs ${TEST_ARGS} docs README.md
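The endpoint-specific targets above all follow the same pattern: override `GROUNDLIGHT_ENDPOINT`, then delegate to a shared entrypoint. A minimal sketch of that pattern (the `run_tests` function stands in for the real `$(MAKE) test` invocation):

```shell
# Sketch of the Makefile pattern: each environment target overrides
# GROUNDLIGHT_ENDPOINT, then delegates to one shared test entrypoint.
run_tests() {
    echo "running tests against ${GROUNDLIGHT_ENDPOINT:-https://api.groundlight.ai/}"
}

# test-local equivalent:
GROUNDLIGHT_ENDPOINT="http://localhost:8000/" run_tests
# test-integ equivalent:
GROUNDLIGHT_ENDPOINT="https://api.integ.groundlight.ai/" run_tests
```

This is why the commit can add `test-dev` as a one-line target: only the endpoint changes.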
4 changes: 2 additions & 2 deletions docs/docs/building-applications/7-edge.md
@@ -14,8 +14,8 @@ and for communicating with the Groundlight cloud service.

To use the edge endpoint, simply configure the Groundlight SDK to use the edge endpoint's URL instead of the cloud endpoint.
All application logic will work seamlessly and unchanged with the Groundlight Edge Endpoint, except some ML answers will
return much faster locally. The only visible difference is that image queries answered at the edge endpoint will have the prefix `iqe_` instead of `iq_` for image queries answered in the cloud. `iqe_` stands for "image query edge". Edge-originated
image queries will not appear in the cloud dashboard.
return much faster locally. Image queries answered at the edge endpoint will not appear in the cloud dashboard unless
specifically configured to do so, in which case the edge prediction will not be reflected on the image query in the cloud.
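For example, switching the SDK from the cloud to an edge endpoint is only a matter of which URL it resolves (a sketch of the selection logic, not the SDK's actual internals; `GROUNDLIGHT_ENDPOINT` is the documented environment variable, while the localhost URL and port are placeholders for wherever your edge endpoint runs):

```python
import os

# Minimal sketch of endpoint selection (illustrative): an explicit endpoint
# argument wins, then the GROUNDLIGHT_ENDPOINT environment variable, then
# the cloud default.
CLOUD_DEFAULT = "https://api.groundlight.ai/"

def resolve_endpoint(endpoint=None):
    return endpoint or os.environ.get("GROUNDLIGHT_ENDPOINT") or CLOUD_DEFAULT

# Point at a local edge endpoint instead of the cloud:
print(resolve_endpoint("http://localhost:30101/"))  # -> http://localhost:30101/
```

Application code that submits image queries does not change; only the resolved endpoint does.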

## Configuring the Edge Endpoint

2 changes: 1 addition & 1 deletion generated/README.md
@@ -3,7 +3,7 @@ Groundlight makes it simple to understand images. You can easily create computer

This Python package is automatically generated by the [OpenAPI Generator](https://openapi-generator.tech) project:

- API version: 0.18.1
- API version: 0.18.2
- Package version: 1.0.0
- Build package: org.openapitools.codegen.languages.PythonClientCodegen

2 changes: 1 addition & 1 deletion generated/docs/BinaryClassificationResult.md
@@ -6,7 +6,7 @@ Name | Type | Description | Notes
------------ | ------------- | ------------- | -------------
**label** | **str** | |
**confidence** | **float** | | [optional]
**source** | **str** | Source is optional to support edge v0.2 | [optional]
**source** | **str** | | [optional]
**any string name** | **bool, date, datetime, dict, float, int, list, str, none_type** | any string name can be used but the value must be the correct type | [optional]

[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
2 changes: 1 addition & 1 deletion generated/docs/CountingResult.md
@@ -6,7 +6,7 @@ Name | Type | Description | Notes
------------ | ------------- | ------------- | -------------
**count** | **int** | |
**confidence** | **float** | | [optional]
**source** | **str** | Source is optional to support edge v0.2 | [optional]
**source** | **str** | | [optional]
**greater_than_max** | **bool** | | [optional]
**any string name** | **bool, date, datetime, dict, float, int, list, str, none_type** | any string name can be used but the value must be the correct type | [optional]

6 changes: 4 additions & 2 deletions generated/docs/ImageQueriesApi.md
@@ -248,7 +248,7 @@ Name | Type | Description | Notes


Submit an image query against a detector. You must use `\"Content-Type: image/jpeg\"` for the image data. For example: ```Bash $ curl https://api.groundlight.ai/device-api/v1/image-queries?detector_id=det_abc123 \\ --header \"Content-Type: image/jpeg\" \\ --data-binary @path/to/filename.jpeg ```
Submit an image query against a detector. You must use `\"Content-Type: image/jpeg\"` or similar (image/png, image/webp, etc) for the image data. For example: ```Bash $ curl https://api.groundlight.ai/device-api/v1/image-queries?detector_id=det_abc123 \\ --header \"Content-Type: image/jpeg\" \\ --data-binary @path/to/filename.jpeg ```

### Example

@@ -283,6 +283,7 @@ with groundlight_openapi_client.ApiClient(configuration) as api_client:
api_instance = image_queries_api.ImageQueriesApi(api_client)
detector_id = "detector_id_example" # str | Choose a detector by its ID.
human_review = "human_review_example" # str | If set to `DEFAULT`, use the regular escalation logic (i.e., send the image query for human review if the ML model is not confident). If set to `ALWAYS`, always send the image query for human review even if the ML model is confident. If set to `NEVER`, never send the image query for human review even if the ML model is not confident. (optional)
image_query_id = "image_query_id_example" # str | The ID to assign to the created image query. (optional)
inspection_id = "inspection_id_example" # str | Associate the image query with an inspection. (optional)
metadata = "metadata_example" # str | A dictionary of custom key/value metadata to associate with the image query (limited to 1KB). (optional)
patience_time = 3.14 # float | How long to wait for a confident response. (optional)
@@ -299,7 +300,7 @@ with groundlight_openapi_client.ApiClient(configuration) as api_client:
# example passing only required values which don't have defaults set
# and optional values
try:
api_response = api_instance.submit_image_query(detector_id, human_review=human_review, inspection_id=inspection_id, metadata=metadata, patience_time=patience_time, want_async=want_async, body=body)
api_response = api_instance.submit_image_query(detector_id, human_review=human_review, image_query_id=image_query_id, inspection_id=inspection_id, metadata=metadata, patience_time=patience_time, want_async=want_async, body=body)
pprint(api_response)
except groundlight_openapi_client.ApiException as e:
print("Exception when calling ImageQueriesApi->submit_image_query: %s\n" % e)
@@ -312,6 +313,7 @@ Name | Type | Description | Notes
------------- | ------------- | ------------- | -------------
**detector_id** | **str**| Choose a detector by its ID. |
**human_review** | **str**| If set to &#x60;DEFAULT&#x60;, use the regular escalation logic (i.e., send the image query for human review if the ML model is not confident). If set to &#x60;ALWAYS&#x60;, always send the image query for human review even if the ML model is confident. If set to &#x60;NEVER&#x60;, never send the image query for human review even if the ML model is not confident. | [optional]
**image_query_id** | **str**| The ID to assign to the created image query. | [optional]
**inspection_id** | **str**| Associate the image query with an inspection. | [optional]
**metadata** | **str**| A dictionary of custom key/value metadata to associate with the image query (limited to 1KB). | [optional]
**patience_time** | **float**| How long to wait for a confident response. | [optional]
2 changes: 1 addition & 1 deletion generated/docs/LabelValue.md
@@ -9,9 +9,9 @@ Name | Type | Description | Notes
**annotations_requested** | **[bool, date, datetime, dict, float, int, list, str, none_type]** | | [readonly]
**created_at** | **datetime** | | [readonly]
**detector_id** | **int, none_type** | | [readonly]
**source** | **bool, date, datetime, dict, float, int, list, str, none_type** | | [readonly]
**text** | **str, none_type** | Text annotations | [readonly]
**rois** | [**[ROI], none_type**](ROI.md) | | [optional]
**source** | **bool, date, datetime, dict, float, int, list, str, none_type** | | [optional] [readonly]
**any string name** | **bool, date, datetime, dict, float, int, list, str, none_type** | any string name can be used but the value must be the correct type | [optional]

[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
2 changes: 1 addition & 1 deletion generated/docs/MultiClassificationResult.md
@@ -6,7 +6,7 @@ Name | Type | Description | Notes
------------ | ------------- | ------------- | -------------
**label** | **str** | |
**confidence** | **float** | | [optional]
**source** | **str** | Source is optional to support edge v0.2 | [optional]
**source** | **str** | | [optional]
**any string name** | **bool, date, datetime, dict, float, int, list, str, none_type** | any string name can be used but the value must be the correct type | [optional]

[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
2 changes: 1 addition & 1 deletion generated/groundlight_openapi_client/__init__.py
@@ -5,7 +5,7 @@
Groundlight makes it simple to understand images. You can easily create computer vision detectors just by describing what you want to know using natural language. # noqa: E501
The version of the OpenAPI document: 0.18.1
The version of the OpenAPI document: 0.18.2
Contact: [email protected]
Generated by: https://openapi-generator.tech
"""
2 changes: 1 addition & 1 deletion generated/groundlight_openapi_client/api/actions_api.py
@@ -3,7 +3,7 @@
Groundlight makes it simple to understand images. You can easily create computer vision detectors just by describing what you want to know using natural language. # noqa: E501
The version of the OpenAPI document: 0.18.1
The version of the OpenAPI document: 0.18.2
Contact: [email protected]
Generated by: https://openapi-generator.tech
"""
@@ -3,7 +3,7 @@
Groundlight makes it simple to understand images. You can easily create computer vision detectors just by describing what you want to know using natural language. # noqa: E501
The version of the OpenAPI document: 0.18.1
The version of the OpenAPI document: 0.18.2
Contact: [email protected]
Generated by: https://openapi-generator.tech
"""
@@ -3,7 +3,7 @@
Groundlight makes it simple to understand images. You can easily create computer vision detectors just by describing what you want to know using natural language. # noqa: E501
The version of the OpenAPI document: 0.18.1
The version of the OpenAPI document: 0.18.2
Contact: [email protected]
Generated by: https://openapi-generator.tech
"""
2 changes: 1 addition & 1 deletion generated/groundlight_openapi_client/api/detectors_api.py
@@ -3,7 +3,7 @@
Groundlight makes it simple to understand images. You can easily create computer vision detectors just by describing what you want to know using natural language. # noqa: E501
The version of the OpenAPI document: 0.18.1
The version of the OpenAPI document: 0.18.2
Contact: [email protected]
Generated by: https://openapi-generator.tech
"""
9 changes: 7 additions & 2 deletions generated/groundlight_openapi_client/api/image_queries_api.py
@@ -3,7 +3,7 @@
Groundlight makes it simple to understand images. You can easily create computer vision detectors just by describing what you want to know using natural language. # noqa: E501
The version of the OpenAPI document: 0.18.1
The version of the OpenAPI document: 0.18.2
Contact: [email protected]
Generated by: https://openapi-generator.tech
"""
@@ -171,6 +171,7 @@ def __init__(self, api_client=None):
"all": [
"detector_id",
"human_review",
"image_query_id",
"inspection_id",
"metadata",
"patience_time",
@@ -190,6 +191,7 @@
"openapi_types": {
"detector_id": (str,),
"human_review": (str,),
"image_query_id": (str,),
"inspection_id": (str,),
"metadata": (str,),
"patience_time": (float,),
@@ -199,6 +201,7 @@
"attribute_map": {
"detector_id": "detector_id",
"human_review": "human_review",
"image_query_id": "image_query_id",
"inspection_id": "inspection_id",
"metadata": "metadata",
"patience_time": "patience_time",
@@ -207,6 +210,7 @@
"location_map": {
"detector_id": "query",
"human_review": "query",
"image_query_id": "query",
"inspection_id": "query",
"metadata": "query",
"patience_time": "query",
@@ -406,7 +410,7 @@ def list_image_queries(self, **kwargs):
def submit_image_query(self, detector_id, **kwargs):
"""submit_image_query # noqa: E501
Submit an image query against a detector. You must use `\"Content-Type: image/jpeg\"` for the image data. For example: ```Bash $ curl https://api.groundlight.ai/device-api/v1/image-queries?detector_id=det_abc123 \\ --header \"Content-Type: image/jpeg\" \\ --data-binary @path/to/filename.jpeg ``` # noqa: E501
Submit an image query against a detector. You must use `\"Content-Type: image/jpeg\"` or similar (image/png, image/webp, etc) for the image data. For example: ```Bash $ curl https://api.groundlight.ai/device-api/v1/image-queries?detector_id=det_abc123 \\ --header \"Content-Type: image/jpeg\" \\ --data-binary @path/to/filename.jpeg ``` # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
@@ -418,6 +422,7 @@ def submit_image_query(self, detector_id, **kwargs):
Keyword Args:
human_review (str): If set to `DEFAULT`, use the regular escalation logic (i.e., send the image query for human review if the ML model is not confident). If set to `ALWAYS`, always send the image query for human review even if the ML model is confident. If set to `NEVER`, never send the image query for human review even if the ML model is not confident.. [optional]
image_query_id (str): The ID to assign to the created image query.. [optional]
inspection_id (str): Associate the image query with an inspection.. [optional]
metadata (str): A dictionary of custom key/value metadata to associate with the image query (limited to 1KB).. [optional]
patience_time (float): How long to wait for a confident response.. [optional]
2 changes: 1 addition & 1 deletion generated/groundlight_openapi_client/api/labels_api.py
@@ -3,7 +3,7 @@
Groundlight makes it simple to understand images. You can easily create computer vision detectors just by describing what you want to know using natural language. # noqa: E501
The version of the OpenAPI document: 0.18.1
The version of the OpenAPI document: 0.18.2
Contact: [email protected]
Generated by: https://openapi-generator.tech
"""
2 changes: 1 addition & 1 deletion generated/groundlight_openapi_client/api/notes_api.py
@@ -3,7 +3,7 @@
Groundlight makes it simple to understand images. You can easily create computer vision detectors just by describing what you want to know using natural language. # noqa: E501
The version of the OpenAPI document: 0.18.1
The version of the OpenAPI document: 0.18.2
Contact: [email protected]
Generated by: https://openapi-generator.tech
"""
2 changes: 1 addition & 1 deletion generated/groundlight_openapi_client/api/user_api.py
@@ -3,7 +3,7 @@
Groundlight makes it simple to understand images. You can easily create computer vision detectors just by describing what you want to know using natural language. # noqa: E501
The version of the OpenAPI document: 0.18.1
The version of the OpenAPI document: 0.18.2
Contact: [email protected]
Generated by: https://openapi-generator.tech
"""
2 changes: 1 addition & 1 deletion generated/groundlight_openapi_client/api_client.py
@@ -3,7 +3,7 @@
Groundlight makes it simple to understand images. You can easily create computer vision detectors just by describing what you want to know using natural language. # noqa: E501
The version of the OpenAPI document: 0.18.1
The version of the OpenAPI document: 0.18.2
Contact: [email protected]
Generated by: https://openapi-generator.tech
"""
4 changes: 2 additions & 2 deletions generated/groundlight_openapi_client/configuration.py
@@ -3,7 +3,7 @@
Groundlight makes it simple to understand images. You can easily create computer vision detectors just by describing what you want to know using natural language. # noqa: E501
The version of the OpenAPI document: 0.18.1
The version of the OpenAPI document: 0.18.2
Contact: [email protected]
Generated by: https://openapi-generator.tech
"""
@@ -419,7 +419,7 @@ def to_debug_report(self):
"Python SDK Debug Report:\n"
"OS: {env}\n"
"Python Version: {pyversion}\n"
"Version of the API: 0.18.1\n"
"Version of the API: 0.18.2\n"
"SDK Package Version: 1.0.0".format(env=sys.platform, pyversion=sys.version)
)

2 changes: 1 addition & 1 deletion generated/groundlight_openapi_client/exceptions.py
@@ -3,7 +3,7 @@
Groundlight makes it simple to understand images. You can easily create computer vision detectors just by describing what you want to know using natural language. # noqa: E501
The version of the OpenAPI document: 0.18.1
The version of the OpenAPI document: 0.18.2
Contact: [email protected]
Generated by: https://openapi-generator.tech
"""
2 changes: 1 addition & 1 deletion generated/groundlight_openapi_client/model/action.py
@@ -3,7 +3,7 @@
Groundlight makes it simple to understand images. You can easily create computer vision detectors just by describing what you want to know using natural language. # noqa: E501
The version of the OpenAPI document: 0.18.1
The version of the OpenAPI document: 0.18.2
Contact: [email protected]
Generated by: https://openapi-generator.tech
"""
2 changes: 1 addition & 1 deletion generated/groundlight_openapi_client/model/action_list.py
@@ -3,7 +3,7 @@
Groundlight makes it simple to understand images. You can easily create computer vision detectors just by describing what you want to know using natural language. # noqa: E501
The version of the OpenAPI document: 0.18.1
The version of the OpenAPI document: 0.18.2
Contact: [email protected]
Generated by: https://openapi-generator.tech
"""
2 changes: 1 addition & 1 deletion generated/groundlight_openapi_client/model/all_notes.py
@@ -3,7 +3,7 @@
Groundlight makes it simple to understand images. You can easily create computer vision detectors just by describing what you want to know using natural language. # noqa: E501
The version of the OpenAPI document: 0.18.1
The version of the OpenAPI document: 0.18.2
Contact: [email protected]
Generated by: https://openapi-generator.tech
"""
@@ -3,7 +3,7 @@
Groundlight makes it simple to understand images. You can easily create computer vision detectors just by describing what you want to know using natural language. # noqa: E501
The version of the OpenAPI document: 0.18.1
The version of the OpenAPI document: 0.18.2
Contact: [email protected]
Generated by: https://openapi-generator.tech
"""
@@ -3,7 +3,7 @@
Groundlight makes it simple to understand images. You can easily create computer vision detectors just by describing what you want to know using natural language. # noqa: E501
The version of the OpenAPI document: 0.18.1
The version of the OpenAPI document: 0.18.2
Contact: [email protected]
Generated by: https://openapi-generator.tech
"""
@@ -3,7 +3,7 @@
Groundlight makes it simple to understand images. You can easily create computer vision detectors just by describing what you want to know using natural language. # noqa: E501
The version of the OpenAPI document: 0.18.1
The version of the OpenAPI document: 0.18.2
Contact: [email protected]
Generated by: https://openapi-generator.tech
"""