diff --git a/docs/sphinx/source/query.ipynb b/docs/sphinx/source/query.ipynb
index 6a6cb456..263747cd 100644
--- a/docs/sphinx/source/query.ipynb
+++ b/docs/sphinx/source/query.ipynb
@@ -72,9 +72,9 @@
     "for Vespa query api request parameters.\n",
     "\n",
     "The YQL [userQuery()](https://docs.vespa.ai/en/reference/query-language-reference.html#userquery)\n",
-    "operator uses the query read from `query`. The query also specificies to use the app specific [bm25 rank profile](https://docs.vespa.ai/en/reference/bm25.html). The code\n",
+    "operator uses the query read from `query`. The query also specifies to use the app-specific [bm25 rank profile](https://docs.vespa.ai/en/reference/bm25.html). The code\n",
     "uses [context manager](https://realpython.com/python-with-statement/) `with session` statement to make sure that connection pools are released. If\n",
-    "you attempt to make multiple queries, this is important as each query will not have to setup new connections.\n"
+    "you attempt to make multiple queries, this is important as each query will not have to set up new connections.\n"
    ]
   },
   {
@@ -109,8 +109,8 @@
    "source": [
     "Alternatively, if the native [Vespa query parameter](https://docs.vespa.ai/en/reference/query-api-reference.html)\n",
     "contains \".\", which cannot be used as a `kwarg`, the parameters can be sent as HTTP POST with\n",
-    "the `body` argument. In this case `ranking` is an alias of `ranking.profile`, but using `ranking.profile` as a `**kwargs` argument is not allowed in python. This\n",
-    "will combine HTTP parameters with a HTTP POST body.\n"
+    "the `body` argument. In this case, `ranking` is an alias of `ranking.profile`, but using `ranking.profile` as a `**kwargs` argument is not allowed in python. This\n",
+    "will combine HTTP parameters with an HTTP POST body.\n"
    ]
   },
   {
@@ -177,7 +177,7 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "Example of iterating over the returned hits obtained from `respone.hits`, extracting the `cord_uid` field:\n"
+    "Example of iterating over the returned hits obtained from `response.hits`, extracting the `cord_uid` field:\n"
    ]
   },
   {
@@ -250,14 +250,14 @@
    "source": [
     "## Query Performance\n",
     "\n",
-    "There are several things that impact end-to-end query performance\n",
+    "There are several things that impact end-to-end query performance:\n",
     "\n",
     "- HTTP layer performance, connecting handling, mututal TLS handshake and network round-trip latency\n",
     "  - Make sure to re-use connections using context manager `with vespa.app.syncio():` to avoid setting up new connections\n",
     "    for every unique query. See [http best practises](https://cloud.vespa.ai/en/http-best-practices)\n",
-    "  - The size of the fields and the number of hits requested also greatly impacts network performance, a larger payload means higher latency.\n",
-    "  - By adding `\"presentation.timing\": True` as a request parameter, the Vespa response includes the server side processing (also including reading the query\n",
-    "    from network, but not delivering the result over the network). This can be handy to debug latency.\n",
+    "  - The size of the fields and the number of hits requested also greatly impact network performance; a larger payload means higher latency.\n",
+    "  - By adding `\"presentation.timing\": True` as a request parameter, the Vespa response includes the server-side processing (also including reading the query\n",
+    "    from the network, but not delivering the result over the network). This can be handy for debugging latency.\n",
     "- Vespa performance, the features used inside the Vespa instance.\n"
    ]
   },
@@ -410,9 +410,9 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "## Running Queries asynchonously\n",
+    "## Running Queries asynchronously\n",
     "\n",
-    "If you want benchmark the capacity of a Vespa application, we suggest using [vespa-fbench](https://docs.vespa.ai/en/performance/vespa-benchmarking.html#vespa-fbench) that is a load generator tool which lets you measure throughput and latency with a predefined number of clients. Vespa-fbench is not Vespa-specific, and can be used to benchmark any HTTP service.\n",
+    "If you want to benchmark the capacity of a Vespa application, we suggest using [vespa-fbench](https://docs.vespa.ai/en/performance/vespa-benchmarking.html#vespa-fbench), which is a load generator tool that lets you measure throughput and latency with a predefined number of clients. Vespa-fbench is not Vespa-specific, and can be used to benchmark any HTTP service.\n",
     "\n",
     "Another option is to use the Open Source [k6](https://k6.io/) load testing tool.\n",
     "\n",
@@ -710,10 +710,10 @@
    "source": [
     "## Error handling\n",
     "\n",
-    "Vespa's default query timeout is 500ms, PyVespa will by default retry up to 3 times for queries\n",
+    "Vespa's default query timeout is 500ms; Pyvespa will by default retry up to 3 times for queries\n",
     "that return response codes like 429, 500,503 and 504. A `VespaError` is raised if retries did not end up with success. In the following\n",
-    "example we set a very low [timeout](https://docs.vespa.ai/en/reference/query-api-reference.html#timeout) of `1ms` which will cause\n",
-    "Vespa to time out the request and it returns a 504 http error code. The underlaying error is wrapped in a `VespaError` with\n",
+    "example, we set a very low [timeout](https://docs.vespa.ai/en/reference/query-api-reference.html#timeout) of `1ms` which will cause\n",
+    "Vespa to time out the request, and it returns a 504 http error code. The underlying error is wrapped in a `VespaError` with\n",
     "the payload error message returned from Vespa:\n"
    ]
   },
   {
@@ -742,7 +742,7 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "In the following example we forgot to include the `query` parameter, but still reference it in the yql, this cause a bad client request response (400):\n"
+    "In the following example, we forgot to include the `query` parameter but still reference it in the yql. This causes a bad client request response (400):\n"
    ]
   },
   {
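For reference, here is a minimal sketch of the querying pattern the updated cells describe: connection re-use through `app.syncio()`, the `ranking` kwarg as an alias of `ranking.profile`, combining `**kwargs` with a `body` payload (here `presentation.timing`), and iterating over `response.hits`. The endpoint, the YQL, and the `cord_uid`/`title` fields are assumptions borrowed from the notebook's example application, so adjust them to your own deployment:

```python
from vespa.application import Vespa

# Assumed endpoint; point this at your own Vespa deployment.
app = Vespa(url="http://localhost", port=8080)

# Re-using one session keeps connections alive across queries instead of
# paying the connection/TLS setup cost for every request.
with app.syncio() as session:
    response = session.query(
        yql="select cord_uid, title from sources * where userQuery()",
        query="covid vaccine",               # read by userQuery()
        ranking="bm25",                      # alias of ranking.profile
        hits=5,
        body={"presentation.timing": True},  # report server-side timing in the response
    )
    for hit in response.hits:
        print(hit["fields"]["cord_uid"])
```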
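The "Running Queries asynchronously" cell recommends vespa-fbench or k6 for real benchmarking. For simply firing a handful of concurrent queries from Python, a hedged sketch using pyvespa's async session follows; the query strings and YQL are illustrative, and connection-pool settings are left at their defaults:

```python
import asyncio

from vespa.application import Vespa

app = Vespa(url="http://localhost", port=8080)  # assumed endpoint


async def run_queries(queries):
    # One async session shares its connection pool across all concurrent queries.
    async with app.asyncio() as session:
        tasks = [
            session.query(
                yql="select cord_uid from sources * where userQuery()",
                query=q,
                ranking="bm25",
            )
            for q in queries
        ]
        return await asyncio.gather(*tasks)


responses = asyncio.run(run_queries(["covid vaccine", "long covid symptoms"]))
print([len(r.hits) for r in responses])
```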
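Finally, a sketch of the error handling the last two hunks describe: an unrealistically low `1ms` timeout makes Vespa return a 504, and once pyvespa's retries are exhausted the failure surfaces as a `VespaError` carrying Vespa's error payload. The endpoint is again assumed, as is the `vespa.exceptions` import path for `VespaError`:

```python
from vespa.application import Vespa
from vespa.exceptions import VespaError  # assumed import path

app = Vespa(url="http://localhost", port=8080)  # assumed endpoint

with app.syncio() as session:
    try:
        # A 1ms timeout makes Vespa abort the request (HTTP 504); after the
        # built-in retries, pyvespa raises a VespaError with Vespa's message.
        session.query(
            yql="select * from sources * where userQuery()",
            query="covid vaccine",
            timeout="1ms",
        )
    except VespaError as error:
        print(error)
```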