From 1e15629e80e27dd866a2df22e40c470183451dbe Mon Sep 17 00:00:00 2001
From: Abhishek Choudhary
Date: Wed, 10 Jan 2024 08:24:05 +0545
Subject: [PATCH 01/20] docs: add info about env var usage (#10755)

---
 docs/en/latest/deployment-modes.md | 17 +++++++++++++++++
 docs/en/latest/profile.md          |  2 ++
 2 files changed, 19 insertions(+)

diff --git a/docs/en/latest/deployment-modes.md b/docs/en/latest/deployment-modes.md
index edf705a895a9..90ae0cded0ea 100644
--- a/docs/en/latest/deployment-modes.md
+++ b/docs/en/latest/deployment-modes.md
@@ -151,6 +151,23 @@ routes:

 *WARNING*: APISIX will not load the rules into memory from file `conf/apisix.yaml` if there is no `#END` at the end.

+Environment variables can also be used like so:
+
+```yaml
+routes:
+  -
+    uri: /hello
+    upstream:
+      nodes:
+        "${{UPSTREAM_ADDR}}": 1
+      type: roundrobin
+#END
+```
+
+*WARNING*: When using Docker to deploy APISIX in standalone mode, new environment variables added to `apisix.yaml` after APISIX has been initialized will only take effect after a reload.
+
+More information about using environment variables can be found [here](./admin-api.md#using-environment-variables).
+
 ### How to configure Route

 Single Route:
diff --git a/docs/en/latest/profile.md b/docs/en/latest/profile.md
index 63913a43839d..8c0eaa311805 100644
--- a/docs/en/latest/profile.md
+++ b/docs/en/latest/profile.md
@@ -103,6 +103,8 @@ routes:

 Initialize and start APISIX in standalone mode, requests to `/anything` should now be forwarded to `httpbin.org:80/anything`.

+*WARNING*: When using Docker to deploy APISIX in standalone mode, new environment variables added to `apisix.yaml` after APISIX has been initialized will only take effect after a reload.
+
 ## Using the `APISIX_PROFILE` environment variable

 If you have multiple configuration changes for multiple environments, it might be better to have a different configuration file for each.
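To make the reload behavior described in this patch concrete, a minimal lifecycle might look like this (a sketch only; it reuses the `UPSTREAM_ADDR` variable from the example above and assumes APISIX is run directly via its CLI rather than through a container entrypoint):

```shell
# The variable must exist in APISIX's environment before startup so that
# "${{UPSTREAM_ADDR}}" in conf/apisix.yaml can be resolved.
export UPSTREAM_ADDR=httpbin.org:80
apisix start

# A later change to the variable is not picked up automatically;
# reload APISIX for the new value to take effect.
export UPSTREAM_ADDR=httpbin.org:8080
apisix reload
```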
From 131f20a58c9fd096c0329906e9b894fc57a8f8c6 Mon Sep 17 00:00:00 2001 From: baiyun <337531158@qq.com> Date: Wed, 10 Jan 2024 13:44:53 +0800 Subject: [PATCH 02/20] docs: Add default log format for each logger plugin (#10764) --- docs/en/latest/plugins/clickhouse-logger.md | 43 ++++++++++++++ .../en/latest/plugins/elasticsearch-logger.md | 40 +++++++++++++ docs/en/latest/plugins/error-log-logger.md | 6 ++ docs/en/latest/plugins/file-logger.md | 44 ++++++++++++++ .../en/latest/plugins/google-cloud-logging.md | 30 ++++++++++ docs/en/latest/plugins/http-logger.md | 44 ++++++++++++++ docs/en/latest/plugins/loggly.md | 6 ++ docs/en/latest/plugins/loki-logger.md | 42 ++++++++++++++ docs/en/latest/plugins/rocketmq-logger.md | 55 ++++++++++++++++++ docs/en/latest/plugins/skywalking-logger.md | 57 +++++++++++++++++++ docs/en/latest/plugins/sls-logger.md | 27 +++++++++ docs/en/latest/plugins/splunk-hec-logging.md | 31 ++++++++++ docs/en/latest/plugins/syslog.md | 6 ++ docs/en/latest/plugins/tcp-logger.md | 40 +++++++++++++ docs/en/latest/plugins/tencent-cloud-cls.md | 40 +++++++++++++ docs/en/latest/plugins/udp-logger.md | 40 +++++++++++++ docs/zh/latest/plugins/clickhouse-logger.md | 43 ++++++++++++++ .../zh/latest/plugins/elasticsearch-logger.md | 40 +++++++++++++ docs/zh/latest/plugins/error-log-logger.md | 6 ++ docs/zh/latest/plugins/file-logger.md | 44 ++++++++++++++ .../zh/latest/plugins/google-cloud-logging.md | 30 ++++++++++ docs/zh/latest/plugins/http-logger.md | 44 ++++++++++++++ docs/zh/latest/plugins/loggly.md | 6 ++ docs/zh/latest/plugins/loki-logger.md | 42 ++++++++++++++ docs/zh/latest/plugins/rocketmq-logger.md | 1 - docs/zh/latest/plugins/skywalking-logger.md | 57 +++++++++++++++++++ docs/zh/latest/plugins/sls-logger.md | 27 +++++++++ docs/zh/latest/plugins/splunk-hec-logging.md | 31 ++++++++++ docs/zh/latest/plugins/syslog.md | 6 ++ docs/zh/latest/plugins/tcp-logger.md | 40 +++++++++++++ docs/zh/latest/plugins/tencent-cloud-cls.md | 40 +++++++++++++ docs/zh/latest/plugins/udp-logger.md | 40 +++++++++++++ 32 files changed, 1047 insertions(+), 1 deletion(-) diff --git a/docs/en/latest/plugins/clickhouse-logger.md b/docs/en/latest/plugins/clickhouse-logger.md index feb6dd8a757b..f41a7aaec2e7 100644 --- a/docs/en/latest/plugins/clickhouse-logger.md +++ b/docs/en/latest/plugins/clickhouse-logger.md @@ -54,6 +54,49 @@ NOTE: `encrypt_fields = {"password"}` is also defined in the schema, which means This Plugin supports using batch processors to aggregate and process entries (logs/data) in a batch. This avoids the need for frequently submitting the data. The batch processor submits data every `5` seconds or when the data in the queue reaches `1000`. See [Batch Processor](../batch-processor.md#configuration) for more information or setting your custom configuration. 
+### Example of default log format + +```json +{ + "response": { + "status": 200, + "size": 118, + "headers": { + "content-type": "text/plain", + "connection": "close", + "server": "APISIX/3.7.0", + "content-length": "12" + } + }, + "client_ip": "127.0.0.1", + "upstream_latency": 3, + "apisix_latency": 98.999998092651, + "upstream": "127.0.0.1:1982", + "latency": 101.99999809265, + "server": { + "version": "3.7.0", + "hostname": "localhost" + }, + "route_id": "1", + "start_time": 1704507612177, + "service_id": "", + "request": { + "method": "POST", + "querystring": { + "foo": "unknown" + }, + "headers": { + "host": "localhost", + "connection": "close", + "content-length": "18" + }, + "size": 110, + "uri": "/hello?foo=unknown", + "url": "http://localhost:1984/hello?foo=unknown" + } +} +``` + ## Metadata You can also set the format of the logs by configuring the Plugin metadata. The following configurations are available: diff --git a/docs/en/latest/plugins/elasticsearch-logger.md b/docs/en/latest/plugins/elasticsearch-logger.md index 89a7a826fa69..06f70354f844 100644 --- a/docs/en/latest/plugins/elasticsearch-logger.md +++ b/docs/en/latest/plugins/elasticsearch-logger.md @@ -53,6 +53,46 @@ NOTE: `encrypt_fields = {"auth.password"}` is also defined in the schema, which This Plugin supports using batch processors to aggregate and process entries (logs/data) in a batch. This avoids the need for frequently submitting the data. The batch processor submits data every `5` seconds or when the data in the queue reaches `1000`. See [Batch Processor](../batch-processor.md#configuration) for more information or setting your custom configuration. +### Example of default log format + +```json +{ + "upstream_latency": 2, + "apisix_latency": 100.9999256134, + "request": { + "size": 59, + "url": "http://localhost:1984/hello", + "method": "GET", + "querystring": {}, + "headers": { + "host": "localhost", + "connection": "close" + }, + "uri": "/hello" + }, + "server": { + "version": "3.7.0", + "hostname": "localhost" + }, + "client_ip": "127.0.0.1", + "upstream": "127.0.0.1:1980", + "response": { + "status": 200, + "headers": { + "content-length": "12", + "connection": "close", + "content-type": "text/plain", + "server": "APISIX/3.7.0" + }, + "size": 118 + }, + "start_time": 1704524807607, + "route_id": "1", + "service_id": "", + "latency": 102.9999256134 +} +``` + ## Enable Plugin ### Full configuration diff --git a/docs/en/latest/plugins/error-log-logger.md b/docs/en/latest/plugins/error-log-logger.md index 889ea7f25d8e..f63e89a9bd4f 100644 --- a/docs/en/latest/plugins/error-log-logger.md +++ b/docs/en/latest/plugins/error-log-logger.md @@ -69,6 +69,12 @@ NOTE: `encrypt_fields = {"clickhouse.password"}` is also defined in the schema, This Plugin supports using batch processors to aggregate and process entries (logs/data) in a batch. This avoids the need for frequently submitting the data. The batch processor submits data every `5` seconds or when the data in the queue reaches `1000`. See [Batch Processor](../batch-processor.md#configuration) for more information or setting your custom configuration. 
+### Example of default log format + +```text +["2024/01/06 16:04:30 [warn] 11786#9692271: *1 [lua] plugin.lua:205: load(): new plugins: {"error-log-logger":true}, context: init_worker_by_lua*","\n","2024/01/06 16:04:30 [warn] 11786#9692271: *1 [lua] plugin.lua:255: load_stream(): new plugins: {"limit-conn":true,"ip-restriction":true,"syslog":true,"mqtt-proxy":true}, context: init_worker_by_lua*","\n"] +``` + ## Enable Plugin To enable the Plugin, you can add it in your configuration file (`conf/config.yaml`): diff --git a/docs/en/latest/plugins/file-logger.md b/docs/en/latest/plugins/file-logger.md index f46b5a68c6cd..61df441ee9ea 100644 --- a/docs/en/latest/plugins/file-logger.md +++ b/docs/en/latest/plugins/file-logger.md @@ -53,6 +53,50 @@ The `file-logger` Plugin is used to push log streams to a specific location. | include_resp_body_expr | array | False | When the `include_resp_body` attribute is set to `true`, use this to filter based on [lua-resty-expr](https://github.com/api7/lua-resty-expr). If present, only logs the response into file if the expression evaluates to `true`. | | match | array[] | False | Logs will be recorded when the rule matching is successful if the option is set. See [lua-resty-expr](https://github.com/api7/lua-resty-expr#operator-list) for a list of available expressions. | +### Example of default log format + + ```json + { + "service_id": "", + "apisix_latency": 100.99999809265, + "start_time": 1703907485819, + "latency": 101.99999809265, + "upstream_latency": 1, + "client_ip": "127.0.0.1", + "route_id": "1", + "server": { + "version": "3.7.0", + "hostname": "localhost" + }, + "request": { + "headers": { + "host": "127.0.0.1:1984", + "content-type": "application/x-www-form-urlencoded", + "user-agent": "lua-resty-http/0.16.1 (Lua) ngx_lua/10025", + "content-length": "12" + }, + "method": "POST", + "size": 194, + "url": "http://127.0.0.1:1984/hello?log_body=no", + "uri": "/hello?log_body=no", + "querystring": { + "log_body": "no" + } + }, + "response": { + "headers": { + "content-type": "text/plain", + "connection": "close", + "content-length": "12", + "server": "APISIX/3.7.0" + }, + "status": 200, + "size": 123 + }, + "upstream": "127.0.0.1:1982" + } + ``` + ## Metadata You can also set the format of the logs by configuring the Plugin metadata. The following configurations are available: diff --git a/docs/en/latest/plugins/google-cloud-logging.md b/docs/en/latest/plugins/google-cloud-logging.md index 459ee96f9822..3ea60f4db447 100644 --- a/docs/en/latest/plugins/google-cloud-logging.md +++ b/docs/en/latest/plugins/google-cloud-logging.md @@ -53,6 +53,36 @@ NOTE: `encrypt_fields = {"auth_config.private_key"}` is also defined in the sche This Plugin supports using batch processors to aggregate and process entries (logs/data) in a batch. This avoids the need for frequently submitting the data. The batch processor submits data every `5` seconds or when the data in the queue reaches `1000`. See [Batch Processor](../batch-processor.md#configuration) for more information or setting your custom configuration. 
+### Example of default log format + +```json +{ + "insertId": "0013a6afc9c281ce2e7f413c01892bdc", + "labels": { + "source": "apache-apisix-google-cloud-logging" + }, + "logName": "projects/apisix/logs/apisix.apache.org%2Flogs", + "httpRequest": { + "requestMethod": "GET", + "requestUrl": "http://localhost:1984/hello", + "requestSize": 59, + "responseSize": 118, + "status": 200, + "remoteIp": "127.0.0.1", + "serverIp": "127.0.0.1:1980", + "latency": "0.103s" + }, + "resource": { + "type": "global" + }, + "jsonPayload": { + "service_id": "", + "route_id": "1" + }, + "timestamp": "2024-01-06T03:34:45.065Z" +} +``` + ## Metadata You can also set the format of the logs by configuring the Plugin metadata. The following configurations are available: diff --git a/docs/en/latest/plugins/http-logger.md b/docs/en/latest/plugins/http-logger.md index aef965f0a954..4ad87acd07c4 100644 --- a/docs/en/latest/plugins/http-logger.md +++ b/docs/en/latest/plugins/http-logger.md @@ -54,6 +54,50 @@ This Plugin supports using batch processors to aggregate and process entries (lo ::: +### Example of default log format + + ```json + { + "service_id": "", + "apisix_latency": 100.99999809265, + "start_time": 1703907485819, + "latency": 101.99999809265, + "upstream_latency": 1, + "client_ip": "127.0.0.1", + "route_id": "1", + "server": { + "version": "3.7.0", + "hostname": "localhost" + }, + "request": { + "headers": { + "host": "127.0.0.1:1984", + "content-type": "application/x-www-form-urlencoded", + "user-agent": "lua-resty-http/0.16.1 (Lua) ngx_lua/10025", + "content-length": "12" + }, + "method": "POST", + "size": 194, + "url": "http://127.0.0.1:1984/hello?log_body=no", + "uri": "/hello?log_body=no", + "querystring": { + "log_body": "no" + } + }, + "response": { + "headers": { + "content-type": "text/plain", + "connection": "close", + "content-length": "12", + "server": "APISIX/3.7.0" + }, + "status": 200, + "size": 123 + }, + "upstream": "127.0.0.1:1982" + } + ``` + ## Metadata You can also set the format of the logs by configuring the Plugin metadata. The following configurations are available: diff --git a/docs/en/latest/plugins/loggly.md b/docs/en/latest/plugins/loggly.md index 663ba05742e4..c7318ce76850 100644 --- a/docs/en/latest/plugins/loggly.md +++ b/docs/en/latest/plugins/loggly.md @@ -53,6 +53,12 @@ This Plugin supports using batch processors to aggregate and process entries (lo To generate a Customer token, go to `/loggly.com/tokens` or navigate to Logs > Source setup > Customer tokens. +### Example of default log format + +```text +<10>1 2024-01-06T06:50:51.739Z 127.0.0.1 apisix 58525 - [token-1@41058 tag="apisix"] {"service_id":"","server":{"version":"3.7.0","hostname":"localhost"},"apisix_latency":100.99985313416,"request":{"url":"http://127.0.0.1:1984/opentracing","headers":{"content-type":"application/x-www-form-urlencoded","user-agent":"lua-resty-http/0.16.1 (Lua) ngx_lua/10025","host":"127.0.0.1:1984"},"querystring":{},"uri":"/opentracing","size":155,"method":"GET"},"response":{"headers":{"content-type":"text/plain","server":"APISIX/3.7.0","transfer-encoding":"chunked","connection":"close"},"size":141,"status":200},"route_id":"1","latency":103.99985313416,"upstream_latency":3,"client_ip":"127.0.0.1","upstream":"127.0.0.1:1982","start_time":1704523851634} +``` + ## Metadata You can also configure the Plugin through Plugin metadata. 
The following configurations are available:

diff --git a/docs/en/latest/plugins/loki-logger.md b/docs/en/latest/plugins/loki-logger.md
index e79e5396effb..2a9e160b962b 100644
--- a/docs/en/latest/plugins/loki-logger.md
+++ b/docs/en/latest/plugins/loki-logger.md
@@ -55,6 +55,48 @@ When the Plugin is enabled, APISIX will serialize the request context informatio

 This plugin supports using batch processors to aggregate and process entries (logs/data) in a batch. This avoids the need for frequently submitting the data. The batch processor submits data every `5` seconds or when the data in the queue reaches `1000`. See [Batch Processor](../batch-processor.md#configuration) for more information or setting your custom configuration.

+### Example of default log format
+
+```json
+{
+    "request": {
+        "headers": {
+            "connection": "close",
+            "host": "localhost",
+            "test-header": "only-for-test#1"
+        },
+        "method": "GET",
+        "uri": "/hello",
+        "url": "http://localhost:1984/hello",
+        "size": 89,
+        "querystring": {}
+    },
+    "client_ip": "127.0.0.1",
+    "start_time": 1704525701293,
+    "apisix_latency": 100.99994659424,
+    "response": {
+        "headers": {
+            "content-type": "text/plain",
+            "server": "APISIX/3.7.0",
+            "content-length": "12",
+            "connection": "close"
+        },
+        "status": 200,
+        "size": 118
+    },
+    "route_id": "1",
+    "loki_log_time": "1704525701293000000",
+    "upstream_latency": 5,
+    "latency": 105.99994659424,
+    "upstream": "127.0.0.1:1980",
+    "server": {
+        "hostname": "localhost",
+        "version": "3.7.0"
+    },
+    "service_id": ""
+}
+```
+
 ## Metadata

 You can also set the format of the logs by configuring the Plugin metadata. The following configurations are available:
diff --git a/docs/en/latest/plugins/rocketmq-logger.md b/docs/en/latest/plugins/rocketmq-logger.md
index 3f50b07867f7..324ecfe51d9b 100644
--- a/docs/en/latest/plugins/rocketmq-logger.md
+++ b/docs/en/latest/plugins/rocketmq-logger.md
@@ -66,6 +66,61 @@ If the process is successful, it will return `true` and if it fails, returns `ni

 ### meta_format example

+- default:
+
+```json
+    {
+        "upstream": "127.0.0.1:1980",
+        "start_time": 1619414294760,
+        "client_ip": "127.0.0.1",
+        "service_id": "",
+        "route_id": "1",
+        "request": {
+            "querystring": {
+                "ab": "cd"
+            },
+            "size": 90,
+            "uri": "/hello?ab=cd",
+            "url": "http://localhost:1984/hello?ab=cd",
+            "headers": {
+                "host": "localhost",
+                "content-length": "6",
+                "connection": "close"
+            },
+            "method": "GET"
+        },
+        "response": {
+            "headers": {
+                "connection": "close",
+                "content-type": "text/plain; charset=utf-8",
+                "date": "Mon, 26 Apr 2021 05:18:14 GMT",
+                "server": "APISIX/2.5",
+                "transfer-encoding": "chunked"
+            },
+            "size": 190,
+            "status": 200
+        },
+        "server": {
+            "hostname": "localhost",
+            "version": "2.5"
+        },
+        "latency": 0
+    }
+```
+
+- origin:
+
+```http
+    GET /hello?ab=cd HTTP/1.1
+    host: localhost
+    content-length: 6
+    connection: close
+
+    abcdef
+```
+
 - `default`:

 ```json
diff --git a/docs/en/latest/plugins/skywalking-logger.md b/docs/en/latest/plugins/skywalking-logger.md
index df4c786fd492..b72ec5577e62 100644
--- a/docs/en/latest/plugins/skywalking-logger.md
+++ b/docs/en/latest/plugins/skywalking-logger.md
@@ -47,6 +47,63 @@ If there is an existing tracing context, it sets up the trace-log correlation au

 This Plugin supports using batch processors to aggregate and process entries (logs/data) in a batch. This avoids the need for frequently submitting the data. The batch processor submits data every `5` seconds or when the data in the queue reaches `1000`.
See [Batch Processor](../batch-processor.md#configuration) for more information or setting your custom configuration.

+### Example of default log format
+
+  ```json
+  {
+      "serviceInstance": "APISIX Instance Name",
+      "body": {
+         "json": {
+             "json": "body-json"
+         }
+      },
+      "endpoint": "/opentracing",
+      "service": "APISIX"
+  }
+  ```
+
+The body-json data above is an escaped JSON string. Formatted, it reads as follows:
+
+  ```json
+  {
+      "response": {
+          "status": 200,
+          "headers": {
+              "server": "APISIX/3.7.0",
+              "content-type": "text/plain",
+              "transfer-encoding": "chunked",
+              "connection": "close"
+          },
+          "size": 136
+      },
+      "route_id": "1",
+      "upstream": "127.0.0.1:1982",
+      "upstream_latency": 8,
+      "apisix_latency": 101.00020599365,
+      "client_ip": "127.0.0.1",
+      "service_id": "",
+      "server": {
+          "hostname": "localhost",
+          "version": "3.7.0"
+      },
+      "start_time": 1704429712768,
+      "latency": 109.00020599365,
+      "request": {
+          "headers": {
+              "content-length": "9",
+              "host": "localhost",
+              "connection": "close"
+          },
+          "method": "POST",
+          "body": "body-data",
+          "size": 94,
+          "querystring": {},
+          "url": "http://localhost:1984/opentracing",
+          "uri": "/opentracing"
+      }
+  }
+  ```
+
 ## Metadata

 You can also set the format of the logs by configuring the Plugin metadata. The following configurations are available:
diff --git a/docs/en/latest/plugins/sls-logger.md b/docs/en/latest/plugins/sls-logger.md
index 26808b8cba30..47dc9449bbcf 100644
--- a/docs/en/latest/plugins/sls-logger.md
+++ b/docs/en/latest/plugins/sls-logger.md
@@ -52,6 +52,33 @@ NOTE: `encrypt_fields = {"access_key_secret"}` is also defined in the schema, wh

 This Plugin supports using batch processors to aggregate and process entries (logs/data) in a batch. This avoids the need for frequently submitting the data. The batch processor submits data every `5` seconds or when the data in the queue reaches `1000`. See [Batch Processor](../batch-processor.md#configuration) for more information or setting your custom configuration.

+### Example of default log format
+
+```json
+{
+    "route_conf": {
+        "host": "100.100.99.135",
+        "buffer_duration": 60,
+        "timeout": 30000,
+        "include_req_body": false,
+        "logstore": "your_logstore",
+        "log_format": {
+            "vip": "$remote_addr"
+        },
+        "project": "your_project",
+        "inactive_timeout": 5,
+        "access_key_id": "your_access_key_id",
+        "access_key_secret": "your_access_key_secret",
+        "batch_max_size": 1000,
+        "max_retry_count": 0,
+        "retry_delay": 1,
+        "port": 10009,
+        "name": "sls-logger"
+    },
+    "data": "<46>1 2024-01-06T03:29:56.457Z localhost apisix 28063 - [logservice project=\"your_project\" logstore=\"your_logstore\" access-key-id=\"your_access_key_id\" access-key-secret=\"your_access_key_secret\"] {\"vip\":\"127.0.0.1\",\"route_id\":\"1\"}\n"
+}
+```
+
 ## Metadata

 You can also set the format of the logs by configuring the Plugin metadata. The following configurations are available:
diff --git a/docs/en/latest/plugins/splunk-hec-logging.md b/docs/en/latest/plugins/splunk-hec-logging.md
index bdddfd7fafdf..acfa77468434 100644
--- a/docs/en/latest/plugins/splunk-hec-logging.md
+++ b/docs/en/latest/plugins/splunk-hec-logging.md
@@ -48,6 +48,37 @@ When the Plugin is enabled, APISIX will serialize the request context informatio

 This Plugin supports using batch processors to aggregate and process entries (logs/data) in a batch. This avoids the need for frequently submitting the data. The batch processor submits data every `5` seconds or when the data in the queue reaches `1000`.
See [Batch Processor](../batch-processor.md#configuration) for more information or setting your custom configuration.

+### Example of default log format
+
+```json
+{
+    "sourcetype": "_json",
+    "time": 1704513555.392,
+    "event": {
+        "upstream": "127.0.0.1:1980",
+        "request_url": "http://localhost:1984/hello",
+        "request_query": {},
+        "request_size": 59,
+        "response_headers": {
+            "content-length": "12",
+            "server": "APISIX/3.7.0",
+            "content-type": "text/plain",
+            "connection": "close"
+        },
+        "response_status": 200,
+        "response_size": 118,
+        "latency": 108.00004005432,
+        "request_method": "GET",
+        "request_headers": {
+            "connection": "close",
+            "host": "localhost"
+        }
+    },
+    "source": "apache-apisix-splunk-hec-logging",
+    "host": "localhost"
+}
+```
+
 ## Metadata

 You can also set the format of the logs by configuring the Plugin metadata. The following configurations are available:
diff --git a/docs/en/latest/plugins/syslog.md b/docs/en/latest/plugins/syslog.md
index d8f107f36c74..1a7e5e4a8f79 100644
--- a/docs/en/latest/plugins/syslog.md
+++ b/docs/en/latest/plugins/syslog.md
@@ -50,6 +50,12 @@ Logs can be set as JSON objects.

 This Plugin supports using batch processors to aggregate and process entries (logs/data) in a batch. This avoids the need for frequently submitting the data. The batch processor submits data every `5` seconds or when the data in the queue reaches `1000`. See [Batch Processor](../batch-processor.md#configuration) for more information or setting your custom configuration.

+### Example of default log format
+
+```text
+"<46>1 2024-01-06T02:30:59.145Z 127.0.0.1 apisix 82324 - - {\"response\":{\"status\":200,\"size\":141,\"headers\":{\"content-type\":\"text/plain\",\"server\":\"APISIX/3.7.0\",\"transfer-encoding\":\"chunked\",\"connection\":\"close\"}},\"route_id\":\"1\",\"server\":{\"hostname\":\"baiyundeMacBook-Pro.local\",\"version\":\"3.7.0\"},\"request\":{\"uri\":\"/opentracing\",\"url\":\"http://127.0.0.1:1984/opentracing\",\"querystring\":{},\"method\":\"GET\",\"size\":155,\"headers\":{\"content-type\":\"application/x-www-form-urlencoded\",\"host\":\"127.0.0.1:1984\",\"user-agent\":\"lua-resty-http/0.16.1 (Lua) ngx_lua/10025\"}},\"upstream\":\"127.0.0.1:1982\",\"apisix_latency\":100.99999809265,\"service_id\":\"\",\"upstream_latency\":1,\"start_time\":1704508259044,\"client_ip\":\"127.0.0.1\",\"latency\":101.99999809265}\n"
+```
+
 ## Metadata

 You can also set the format of the logs by configuring the Plugin metadata. The following configurations are available:
diff --git a/docs/en/latest/plugins/tcp-logger.md b/docs/en/latest/plugins/tcp-logger.md
index d1af5b11b497..e5bffac3500e 100644
--- a/docs/en/latest/plugins/tcp-logger.md
+++ b/docs/en/latest/plugins/tcp-logger.md
@@ -50,6 +50,46 @@ This plugin also allows to push logs as a batch to your external TCP server. It

 This Plugin supports using batch processors to aggregate and process entries (logs/data) in a batch. This avoids the need for frequently submitting the data. The batch processor submits data every `5` seconds or when the data in the queue reaches `1000`. See [Batch Processor](../batch-processor.md#configuration) for more information or setting your custom configuration.
+### Example of default log format + +```json +{ + "response": { + "status": 200, + "headers": { + "server": "APISIX/3.7.0", + "content-type": "text/plain", + "content-length": "12", + "connection": "close" + }, + "size": 118 + }, + "server": { + "version": "3.7.0", + "hostname": "localhost" + }, + "start_time": 1704527628474, + "client_ip": "127.0.0.1", + "service_id": "", + "latency": 102.9999256134, + "apisix_latency": 100.9999256134, + "upstream_latency": 2, + "request": { + "headers": { + "connection": "close", + "host": "localhost" + }, + "size": 59, + "method": "GET", + "uri": "/hello", + "url": "http://localhost:1984/hello", + "querystring": {} + }, + "upstream": "127.0.0.1:1980", + "route_id": "1" +} +``` + ## Metadata You can also set the format of the logs by configuring the Plugin metadata. The following configurations are available: diff --git a/docs/en/latest/plugins/tencent-cloud-cls.md b/docs/en/latest/plugins/tencent-cloud-cls.md index 559f13e2d4f0..9895dc564a81 100644 --- a/docs/en/latest/plugins/tencent-cloud-cls.md +++ b/docs/en/latest/plugins/tencent-cloud-cls.md @@ -52,6 +52,46 @@ NOTE: `encrypt_fields = {"secret_key"}` is also defined in the schema, which mea This Plugin supports using batch processors to aggregate and process entries (logs/data) in a batch. This avoids the need for frequently submitting the data. The batch processor submits data every `5` seconds or when the data in the queue reaches `1000`. See [Batch Processor](../batch-processor.md#configuration) for more information or setting your custom configuration. +### Example of default log format + +```json +{ + "response": { + "headers": { + "content-type": "text/plain", + "connection": "close", + "server": "APISIX/3.7.0", + "transfer-encoding": "chunked" + }, + "size": 136, + "status": 200 + }, + "route_id": "1", + "upstream": "127.0.0.1:1982", + "client_ip": "127.0.0.1", + "apisix_latency": 100.99985313416, + "service_id": "", + "latency": 103.99985313416, + "start_time": 1704525145772, + "server": { + "version": "3.7.0", + "hostname": "localhost" + }, + "upstream_latency": 3, + "request": { + "headers": { + "connection": "close", + "host": "localhost" + }, + "url": "http://localhost:1984/opentracing", + "querystring": {}, + "method": "GET", + "size": 65, + "uri": "/opentracing" + } +} +``` + ## Metadata You can also set the format of the logs by configuring the Plugin metadata. The following configurations are available: diff --git a/docs/en/latest/plugins/udp-logger.md b/docs/en/latest/plugins/udp-logger.md index 57d52b5948a6..e3acd0030a00 100644 --- a/docs/en/latest/plugins/udp-logger.md +++ b/docs/en/latest/plugins/udp-logger.md @@ -48,6 +48,46 @@ This plugin also allows to push logs as a batch to your external UDP server. It This Plugin supports using batch processors to aggregate and process entries (logs/data) in a batch. This avoids the need for frequently submitting the data. The batch processor submits data every `5` seconds or when the data in the queue reaches `1000`. See [Batch Processor](../batch-processor.md#configuration) for more information or setting your custom configuration. 
+### Example of default log format + +```json +{ + "apisix_latency": 99.999988555908, + "service_id": "", + "server": { + "version": "3.7.0", + "hostname": "localhost" + }, + "request": { + "method": "GET", + "headers": { + "connection": "close", + "host": "localhost" + }, + "url": "http://localhost:1984/opentracing", + "size": 65, + "querystring": {}, + "uri": "/opentracing" + }, + "start_time": 1704527399740, + "client_ip": "127.0.0.1", + "response": { + "status": 200, + "size": 136, + "headers": { + "server": "APISIX/3.7.0", + "content-type": "text/plain", + "transfer-encoding": "chunked", + "connection": "close" + } + }, + "upstream": "127.0.0.1:1982", + "route_id": "1", + "upstream_latency": 12, + "latency": 111.99998855591 +} +``` + ## Metadata You can also set the format of the logs by configuring the Plugin metadata. The following configurations are available: diff --git a/docs/zh/latest/plugins/clickhouse-logger.md b/docs/zh/latest/plugins/clickhouse-logger.md index 09d4c512f704..f719a40e7848 100644 --- a/docs/zh/latest/plugins/clickhouse-logger.md +++ b/docs/zh/latest/plugins/clickhouse-logger.md @@ -54,6 +54,49 @@ description: 本文介绍了 API 网关 Apache APISIX 如何使用 clickhouse-lo 该插件支持使用批处理器来聚合并批量处理条目(日志/数据)。这样可以避免插件频繁地提交数据,默认情况下批处理器每 `5` 秒钟或队列中的数据达到 `1000` 条时提交数据,如需了解批处理器相关参数设置,请参考 [Batch-Processor](../batch-processor.md#配置)。 +### 默认日志格式示例 + +```json +{ + "response": { + "status": 200, + "size": 118, + "headers": { + "content-type": "text/plain", + "connection": "close", + "server": "APISIX/3.7.0", + "content-length": "12" + } + }, + "client_ip": "127.0.0.1", + "upstream_latency": 3, + "apisix_latency": 98.999998092651, + "upstream": "127.0.0.1:1982", + "latency": 101.99999809265, + "server": { + "version": "3.7.0", + "hostname": "localhost" + }, + "route_id": "1", + "start_time": 1704507612177, + "service_id": "", + "request": { + "method": "POST", + "querystring": { + "foo": "unknown" + }, + "headers": { + "host": "localhost", + "connection": "close", + "content-length": "18" + }, + "size": 110, + "uri": "/hello?foo=unknown", + "url": "http://localhost:1984/hello?foo=unknown" + } +} +``` + ## 配置插件元数据 `clickhouse-logger` 也支持自定义日志格式,与 [http-logger](./http-logger.md) 插件类似。 diff --git a/docs/zh/latest/plugins/elasticsearch-logger.md b/docs/zh/latest/plugins/elasticsearch-logger.md index e15e84783dad..d97311b17ffb 100644 --- a/docs/zh/latest/plugins/elasticsearch-logger.md +++ b/docs/zh/latest/plugins/elasticsearch-logger.md @@ -54,6 +54,46 @@ description: 本文介绍了 API 网关 Apache APISIX 的 elasticsearch-logger 本插件支持使用批处理器来聚合并批量处理条目(日志和数据)。这样可以避免插件频繁地提交数据,默认设置情况下批处理器会每 `5` 秒钟或队列中的数据达到 `1000` 条时提交数据,如需了解或自定义批处理器相关参数设置,请参考 [Batch-Processor](../batch-processor.md#配置) 配置部分。 +### 默认日志格式示例 + +```json +{ + "upstream_latency": 2, + "apisix_latency": 100.9999256134, + "request": { + "size": 59, + "url": "http://localhost:1984/hello", + "method": "GET", + "querystring": {}, + "headers": { + "host": "localhost", + "connection": "close" + }, + "uri": "/hello" + }, + "server": { + "version": "3.7.0", + "hostname": "localhost" + }, + "client_ip": "127.0.0.1", + "upstream": "127.0.0.1:1980", + "response": { + "status": 200, + "headers": { + "content-length": "12", + "connection": "close", + "content-type": "text/plain", + "server": "APISIX/3.7.0" + }, + "size": 118 + }, + "start_time": 1704524807607, + "route_id": "1", + "service_id": "", + "latency": 102.9999256134 +} +``` + ## 启用插件 你可以通过如下命令在指定路由上启用 `elasticsearch-logger` 插件: diff --git a/docs/zh/latest/plugins/error-log-logger.md 
b/docs/zh/latest/plugins/error-log-logger.md index cc3a34b41b79..f8fab500600a 100644 --- a/docs/zh/latest/plugins/error-log-logger.md +++ b/docs/zh/latest/plugins/error-log-logger.md @@ -68,6 +68,12 @@ description: API 网关 Apache APISIX error-log-logger 插件用于将 APISIX 本插件支持使用批处理器来聚合并批量处理条目(日志/数据)。这样可以避免插件频繁地提交数据,默认设置情况下批处理器会每 `5` 秒钟或队列中的数据达到 `1000` 条时提交数据,如需了解或自定义批处理器相关参数设置,请参考 [Batch-Processor](../batch-processor.md#配置) 配置部分。 +### 默认日志格式示例 + +```text +["2024/01/06 16:04:30 [warn] 11786#9692271: *1 [lua] plugin.lua:205: load(): new plugins: {"error-log-logger":true}, context: init_worker_by_lua*","\n","2024/01/06 16:04:30 [warn] 11786#9692271: *1 [lua] plugin.lua:255: load_stream(): new plugins: {"limit-conn":true,"ip-restriction":true,"syslog":true,"mqtt-proxy":true}, context: init_worker_by_lua*","\n"] +``` + ## 启用插件 该插件默认为禁用状态,你可以在 `./conf/config.yaml` 中启用 `error-log-logger` 插件。你可以参考如下示例启用插件: diff --git a/docs/zh/latest/plugins/file-logger.md b/docs/zh/latest/plugins/file-logger.md index ddb9b646b739..87cd6e6ae041 100644 --- a/docs/zh/latest/plugins/file-logger.md +++ b/docs/zh/latest/plugins/file-logger.md @@ -55,6 +55,50 @@ description: API 网关 Apache APISIX file-logger 插件可用于将日志数据 | include_resp_body_expr | array | 否 | 当 `include_resp_body` 属性设置为 `true` 时,使用该属性并基于 [lua-resty-expr](https://github.com/api7/lua-resty-expr) 进行过滤。如果存在,则仅在表达式计算结果为 `true` 时记录响应。 | | match | array[] | 否 | 当设置了这个选项后,只有匹配规则的日志才会被记录。`match` 是一个表达式列表,具体请参考 [lua-resty-expr](https://github.com/api7/lua-resty-expr#operator-list)。 | +### 默认日志格式示例 + + ```json + { + "service_id": "", + "apisix_latency": 100.99999809265, + "start_time": 1703907485819, + "latency": 101.99999809265, + "upstream_latency": 1, + "client_ip": "127.0.0.1", + "route_id": "1", + "server": { + "version": "3.7.0", + "hostname": "localhost" + }, + "request": { + "headers": { + "host": "127.0.0.1:1984", + "content-type": "application/x-www-form-urlencoded", + "user-agent": "lua-resty-http/0.16.1 (Lua) ngx_lua/10025", + "content-length": "12" + }, + "method": "POST", + "size": 194, + "url": "http://127.0.0.1:1984/hello?log_body=no", + "uri": "/hello?log_body=no", + "querystring": { + "log_body": "no" + } + }, + "response": { + "headers": { + "content-type": "text/plain", + "connection": "close", + "content-length": "12", + "server": "APISIX/3.7.0" + }, + "status": 200, + "size": 123 + }, + "upstream": "127.0.0.1:1982" + } + ``` + ## 插件元数据设置 | 名称 | 类型 | 必选项 | 默认值 | 有效值 | 描述 | diff --git a/docs/zh/latest/plugins/google-cloud-logging.md b/docs/zh/latest/plugins/google-cloud-logging.md index 693ae2f7cb43..a0bf33a9f461 100644 --- a/docs/zh/latest/plugins/google-cloud-logging.md +++ b/docs/zh/latest/plugins/google-cloud-logging.md @@ -53,6 +53,36 @@ description: API 网关 Apache APISIX 的 google-cloud-logging 插件可用于 该插件支持使用批处理器来聚合并批量处理条目(日志和数据)。这样可以避免该插件频繁地提交数据。默认情况下每 `5` 秒钟或队列中的数据达到 `1000` 条时,批处理器会自动提交数据,如需了解更多信息或自定义配置,请参考 [Batch Processor](../batch-processor.md#配置)。 +### 默认日志格式示例 + +```json +{ + "insertId": "0013a6afc9c281ce2e7f413c01892bdc", + "labels": { + "source": "apache-apisix-google-cloud-logging" + }, + "logName": "projects/apisix/logs/apisix.apache.org%2Flogs", + "httpRequest": { + "requestMethod": "GET", + "requestUrl": "http://localhost:1984/hello", + "requestSize": 59, + "responseSize": 118, + "status": 200, + "remoteIp": "127.0.0.1", + "serverIp": "127.0.0.1:1980", + "latency": "0.103s" + }, + "resource": { + "type": "global" + }, + "jsonPayload": { + "service_id": "", + "route_id": "1" + }, + "timestamp": "2024-01-06T03:34:45.065Z" +} +``` + ## 
插件元数据 | 名称 | 类型 | 必选项 | 默认值 | 有效值 | 描述 | diff --git a/docs/zh/latest/plugins/http-logger.md b/docs/zh/latest/plugins/http-logger.md index 06fc120308cd..7ea6e1242d86 100644 --- a/docs/zh/latest/plugins/http-logger.md +++ b/docs/zh/latest/plugins/http-logger.md @@ -50,6 +50,50 @@ description: 本文介绍了 API 网关 Apache APISIX 的 http-logger 插件。 该插件支持使用批处理器来聚合并批量处理条目(日志和数据)。这样可以避免该插件频繁地提交数据。默认情况下每 `5` 秒钟或队列中的数据达到 `1000` 条时,批处理器会自动提交数据,如需了解更多信息或自定义配置,请参考 [Batch Processor](../batch-processor.md#配置)。 +### 默认日志格式示例 + + ```json + { + "service_id": "", + "apisix_latency": 100.99999809265, + "start_time": 1703907485819, + "latency": 101.99999809265, + "upstream_latency": 1, + "client_ip": "127.0.0.1", + "route_id": "1", + "server": { + "version": "3.7.0", + "hostname": "localhost" + }, + "request": { + "headers": { + "host": "127.0.0.1:1984", + "content-type": "application/x-www-form-urlencoded", + "user-agent": "lua-resty-http/0.16.1 (Lua) ngx_lua/10025", + "content-length": "12" + }, + "method": "POST", + "size": 194, + "url": "http://127.0.0.1:1984/hello?log_body=no", + "uri": "/hello?log_body=no", + "querystring": { + "log_body": "no" + } + }, + "response": { + "headers": { + "content-type": "text/plain", + "connection": "close", + "content-length": "12", + "server": "APISIX/3.7.0" + }, + "status": 200, + "size": 123 + }, + "upstream": "127.0.0.1:1982" + } + ``` + ## 插件元数据 | 名称 | 类型 | 必选项 | 默认值 | 有效值 | 描述 | diff --git a/docs/zh/latest/plugins/loggly.md b/docs/zh/latest/plugins/loggly.md index 9c5b74010696..27d813c4a9bd 100644 --- a/docs/zh/latest/plugins/loggly.md +++ b/docs/zh/latest/plugins/loggly.md @@ -50,6 +50,12 @@ description: API 网关 Apache APISIX loggly 插件可用于将日志转发到 S 如果要生成用户令牌,请在 Loggly 系统中的 `/loggly.com/tokens` 设置,或者在系统中单击 `Logs > Source setup > Customer tokens`。 +### 默认日志格式示例 + +```text +<10>1 2024-01-06T06:50:51.739Z 127.0.0.1 apisix 58525 - [token-1@41058 tag="apisix"] {"service_id":"","server":{"version":"3.7.0","hostname":"localhost"},"apisix_latency":100.99985313416,"request":{"url":"http://127.0.0.1:1984/opentracing","headers":{"content-type":"application/x-www-form-urlencoded","user-agent":"lua-resty-http/0.16.1 (Lua) ngx_lua/10025","host":"127.0.0.1:1984"},"querystring":{},"uri":"/opentracing","size":155,"method":"GET"},"response":{"headers":{"content-type":"text/plain","server":"APISIX/3.7.0","transfer-encoding":"chunked","connection":"close"},"size":141,"status":200},"route_id":"1","latency":103.99985313416,"upstream_latency":3,"client_ip":"127.0.0.1","upstream":"127.0.0.1:1982","start_time":1704523851634} +``` + ## 插件元数据设置 你还可以通过插件元数据配置插件。详细配置如下: diff --git a/docs/zh/latest/plugins/loki-logger.md b/docs/zh/latest/plugins/loki-logger.md index 37ce2398a31c..c39be2cba8c6 100644 --- a/docs/zh/latest/plugins/loki-logger.md +++ b/docs/zh/latest/plugins/loki-logger.md @@ -55,6 +55,48 @@ description: 本文件包含关于 Apache APISIX loki-logger 插件的信息。 该插件支持使用批处理器对条目(日志/数据)进行批量聚合和处理,避免了频繁提交数据的需求。批处理器每隔 `5` 秒或当队列中的数据达到 `1000` 时提交数据。有关更多信息或设置自定义配置,请参阅 [批处理器](../batch-processor.md#configuration)。 +### 默认日志格式示例 + +```json +{ + "request": { + "headers": { + "connection": "close", + "host": "localhost", + "test-header": "only-for-test#1" + }, + "method": "GET", + "uri": "/hello", + "url": "http://localhost:1984/hello", + "size": 89, + "querystring": {} + }, + "client_ip": "127.0.0.1", + "start_time": 1704525701293, + "apisix_latency": 100.99994659424, + "response": { + "headers": { + "content-type": "text/plain", + "server": "APISIX/3.7.0", + "content-length": "12", + "connection": "close" + }, + "status": 
200, + "size": 118 + }, + "route_id": "1", + "loki_log_time": "1704525701293000000", + "upstream_latency": 5, + "latency": 105.99994659424, + "upstream": "127.0.0.1:1980", + "server": { + "hostname": "localhost", + "version": "3.7.0" + }, + "service_id": "" +} +``` + ## 元数据 您还可以通过配置插件元数据来设置日志的格式。以下配置项可供选择: diff --git a/docs/zh/latest/plugins/rocketmq-logger.md b/docs/zh/latest/plugins/rocketmq-logger.md index 6bef7a6b4ffe..faec6c48d3fc 100644 --- a/docs/zh/latest/plugins/rocketmq-logger.md +++ b/docs/zh/latest/plugins/rocketmq-logger.md @@ -86,7 +86,6 @@ description: API 网关 Apache APISIX 的 rocketmq-logger 插件用于将日志 "content-length": "6", "connection": "close" }, - "body": "abcdef", "method": "GET" }, "response": { diff --git a/docs/zh/latest/plugins/skywalking-logger.md b/docs/zh/latest/plugins/skywalking-logger.md index 3eb837e9fcb4..87cabb3b450a 100644 --- a/docs/zh/latest/plugins/skywalking-logger.md +++ b/docs/zh/latest/plugins/skywalking-logger.md @@ -49,6 +49,63 @@ description: 本文将介绍 API 网关 Apache APISIX 如何通过 skywalking-lo 该插件支持使用批处理器来聚合并批量处理条目(日志/数据)。这样可以避免插件频繁地提交数据,默认设置情况下批处理器会每 `5` 秒钟或队列中的数据达到 `1000` 条时提交数据,如需了解批处理器相关参数设置,请参考 [Batch-Processor](../batch-processor.md#配置)。 +### 默认日志格式示例 + + ```json + { + "serviceInstance": "APISIX Instance Name", + "body": { + "json": { + "json": "body-json" + } + }, + "endpoint": "/opentracing", + "service": "APISIX" + } + ``` + +对于 body-json 数据,它是一个转义后的 json 字符串,格式化后如下: + + ```json + { + "response": { + "status": 200, + "headers": { + "server": "APISIX/3.7.0", + "content-type": "text/plain", + "transfer-encoding": "chunked", + "connection": "close" + }, + "size": 136 + }, + "route_id": "1", + "upstream": "127.0.0.1:1982", + "upstream_latency": 8, + "apisix_latency": 101.00020599365, + "client_ip": "127.0.0.1", + "service_id": "", + "server": { + "hostname": "localhost", + "version": "3.7.0" + }, + "start_time": 1704429712768, + "latency": 109.00020599365, + "request": { + "headers": { + "content-length": "9", + "host": "localhost", + "connection": "close" + }, + "method": "POST", + "body": "body-data", + "size": 94, + "querystring": {}, + "url": "http://localhost:1984/opentracing", + "uri": "/opentracing" + } + } + ``` + ## 配置插件元数据 `skywalking-logger` 也支持自定义日志格式,与 [http-logger](./http-logger.md) 插件类似。 diff --git a/docs/zh/latest/plugins/sls-logger.md b/docs/zh/latest/plugins/sls-logger.md index d5b57a21b569..7bd85c81beaf 100644 --- a/docs/zh/latest/plugins/sls-logger.md +++ b/docs/zh/latest/plugins/sls-logger.md @@ -49,6 +49,33 @@ title: sls-logger 本插件支持使用批处理器来聚合并批量处理条目(日志/数据)。这样可以避免插件频繁地提交数据,默认设置情况下批处理器会每 `5` 秒钟或队列中的数据达到 `1000` 条时提交数据,如需了解或自定义批处理器相关参数设置,请参考 [Batch-Processor](../batch-processor.md#配置) 配置部分。 +### 默认日志格式示例 + +```json +{ + "route_conf": { + "host": "100.100.99.135", + "buffer_duration": 60, + "timeout": 30000, + "include_req_body": false, + "logstore": "your_logstore", + "log_format": { + "vip": "$remote_addr" + }, + "project": "your_project", + "inactive_timeout": 5, + "access_key_id": "your_access_key_id", + "access_key_secret": "your_access_key_secret", + "batch_max_size": 1000, + "max_retry_count": 0, + "retry_delay": 1, + "port": 10009, + "name": "sls-logger" + }, + "data": "<46>1 2024-01-06T03:29:56.457Z localhost apisix 28063 - [logservice project=\"your_project\" logstore=\"your_logstore\" access-key-id=\"your_access_key_id\" access-key-secret=\"your_access_key_secret\"] {\"vip\":\"127.0.0.1\",\"route_id\":\"1\"}\n" +} +``` + ## 插件元数据设置 | 名称 | 类型 | 必选项 | 默认值 | 有效值 | 描述 | diff --git a/docs/zh/latest/plugins/splunk-hec-logging.md 
b/docs/zh/latest/plugins/splunk-hec-logging.md index 112e8fedc028..abaa12406116 100644 --- a/docs/zh/latest/plugins/splunk-hec-logging.md +++ b/docs/zh/latest/plugins/splunk-hec-logging.md @@ -48,6 +48,37 @@ description: API 网关 Apache APISIX 的 splunk-hec-logging 插件可用于将 本插件支持使用批处理器来聚合并批量处理条目(日志和数据)。这样可以避免该插件频繁地提交数据。默认情况下每 `5` 秒钟或队列中的数据达到 `1000` 条时,批处理器会自动提交数据,如需了解更多信息或自定义配置,请参考 [Batch-Processor](../batch-processor.md#配置)。 +### 默认日志格式示例 + +```json +{ + "sourcetype": "_json", + "time": 1704513555.392, + "event": { + "upstream": "127.0.0.1:1980", + "request_url": "http://localhost:1984/hello", + "request_query": {}, + "request_size": 59, + "response_headers": { + "content-length": "12", + "server": "APISIX/3.7.0", + "content-type": "text/plain", + "connection": "close" + }, + "response_status": 200, + "response_size": 118, + "latency": 108.00004005432, + "request_method": "GET", + "request_headers": { + "connection": "close", + "host": "localhost" + } + }, + "source": "apache-apisix-splunk-hec-logging", + "host": "localhost" +} +``` + ## 插件元数据 | 名称 | 类型 | 必选项 | 默认值 | 有效值 | 描述 | diff --git a/docs/zh/latest/plugins/syslog.md b/docs/zh/latest/plugins/syslog.md index d32d8cddb5d9..4707bba8b6fa 100644 --- a/docs/zh/latest/plugins/syslog.md +++ b/docs/zh/latest/plugins/syslog.md @@ -53,6 +53,12 @@ description: API 网关 Apache APISIX syslog 插件可用于将日志推送到 S 该插件支持使用批处理器来聚合并批量处理条目(日志/数据)。这样可以避免插件频繁地提交数据,默认情况下批处理器每 `5` 秒钟或队列中的数据达到 `1000` 条时提交数据,如需了解批处理器相关参数设置,请参考 [Batch-Processor](../batch-processor.md#配置)。 +### 默认日志格式示例 + +```text +"<46>1 2024-01-06T02:30:59.145Z 127.0.0.1 apisix 82324 - - {\"response\":{\"status\":200,\"size\":141,\"headers\":{\"content-type\":\"text/plain\",\"server\":\"APISIX/3.7.0\",\"transfer-encoding\":\"chunked\",\"connection\":\"close\"}},\"route_id\":\"1\",\"server\":{\"hostname\":\"baiyundeMacBook-Pro.local\",\"version\":\"3.7.0\"},\"request\":{\"uri\":\"/opentracing\",\"url\":\"http://127.0.0.1:1984/opentracing\",\"querystring\":{},\"method\":\"GET\",\"size\":155,\"headers\":{\"content-type\":\"application/x-www-form-urlencoded\",\"host\":\"127.0.0.1:1984\",\"user-agent\":\"lua-resty-http/0.16.1 (Lua) ngx_lua/10025\"}},\"upstream\":\"127.0.0.1:1982\",\"apisix_latency\":100.99999809265,\"service_id\":\"\",\"upstream_latency\":1,\"start_time\":1704508259044,\"client_ip\":\"127.0.0.1\",\"latency\":101.99999809265}\n" +``` + ## 插件元数据 | 名称 | 类型 | 必选项 | 默认值 | 描述 | diff --git a/docs/zh/latest/plugins/tcp-logger.md b/docs/zh/latest/plugins/tcp-logger.md index 6a950784d8b6..3984fb1d407a 100644 --- a/docs/zh/latest/plugins/tcp-logger.md +++ b/docs/zh/latest/plugins/tcp-logger.md @@ -47,6 +47,46 @@ description: 本文介绍了 API 网关 Apache APISIX 如何使用 tcp-logger 该插件支持使用批处理器来聚合并批量处理条目(日志/数据)。这样可以避免插件频繁地提交数据,默认情况下批处理器每 `5` 秒钟或队列中的数据达到 `1000` 条时提交数据,如需了解批处理器相关参数设置,请参考 [Batch-Processor](../batch-processor.md#配置)。 +### 默认日志格式示例 + +```json +{ + "response": { + "status": 200, + "headers": { + "server": "APISIX/3.7.0", + "content-type": "text/plain", + "content-length": "12", + "connection": "close" + }, + "size": 118 + }, + "server": { + "version": "3.7.0", + "hostname": "localhost" + }, + "start_time": 1704527628474, + "client_ip": "127.0.0.1", + "service_id": "", + "latency": 102.9999256134, + "apisix_latency": 100.9999256134, + "upstream_latency": 2, + "request": { + "headers": { + "connection": "close", + "host": "localhost" + }, + "size": 59, + "method": "GET", + "uri": "/hello", + "url": "http://localhost:1984/hello", + "querystring": {} + }, + "upstream": "127.0.0.1:1980", + "route_id": "1" +} +``` 
+
 ## 插件元数据

 | 名称 | 类型 | 必选项 | 默认值 | 有效值 | 描述 |
diff --git a/docs/zh/latest/plugins/tencent-cloud-cls.md b/docs/zh/latest/plugins/tencent-cloud-cls.md
index 2d567f8c102f..88bff5b06619 100644
--- a/docs/zh/latest/plugins/tencent-cloud-cls.md
+++ b/docs/zh/latest/plugins/tencent-cloud-cls.md
@@ -50,6 +50,46 @@ description: API 网关 Apache APISIX tencent-cloud-cls 插件可用于将日志

 该插件支持使用批处理器来聚合并批量处理条目(日志/数据)。这样可以避免插件频繁地提交数据,默认情况下批处理器每 `5` 秒钟或队列中的数据达到 `1000` 条时提交数据,如需了解批处理器相关参数设置,请参考 [Batch-Processor](../batch-processor.md#配置)。

+### 默认日志格式示例
+
+```json
+{
+    "response": {
+        "headers": {
+            "content-type": "text/plain",
+            "connection": "close",
+            "server": "APISIX/3.7.0",
+            "transfer-encoding": "chunked"
+        },
+        "size": 136,
+        "status": 200
+    },
+    "route_id": "1",
+    "upstream": "127.0.0.1:1982",
+    "client_ip": "127.0.0.1",
+    "apisix_latency": 100.99985313416,
+    "service_id": "",
+    "latency": 103.99985313416,
+    "start_time": 1704525145772,
+    "server": {
+        "version": "3.7.0",
+        "hostname": "localhost"
+    },
+    "upstream_latency": 3,
+    "request": {
+        "headers": {
+            "connection": "close",
+            "host": "localhost"
+        },
+        "url": "http://localhost:1984/opentracing",
+        "querystring": {},
+        "method": "GET",
+        "size": 65,
+        "uri": "/opentracing"
+    }
+}
+```
+
 ## 插件元数据

 | 名称 | 类型 | 必选项 | 默认值 | 有效值 | 描述 |
diff --git a/docs/zh/latest/plugins/udp-logger.md b/docs/zh/latest/plugins/udp-logger.md
index 0966aaadfd80..00f00d641703 100644
--- a/docs/zh/latest/plugins/udp-logger.md
+++ b/docs/zh/latest/plugins/udp-logger.md
@@ -46,6 +46,46 @@ description: 本文介绍了 API 网关 Apache APISIX 如何使用 udp-logger

 该插件支持使用批处理器来聚合并批量处理条目(日志和数据)。这样可以避免插件频繁地提交数据,默认情况下批处理器每 `5` 秒钟或队列中的数据达到 `1000` 条时提交数据,如需了解批处理器相关参数设置,请参考 [Batch-Processor](../batch-processor.md#配置)。

+### 默认日志格式示例
+
+```json
+{
+    "apisix_latency": 99.999988555908,
+    "service_id": "",
+    "server": {
+        "version": "3.7.0",
+        "hostname": "localhost"
+    },
+    "request": {
+        "method": "GET",
+        "headers": {
+            "connection": "close",
+            "host": "localhost"
+        },
+        "url": "http://localhost:1984/opentracing",
+        "size": 65,
+        "querystring": {},
+        "uri": "/opentracing"
+    },
+    "start_time": 1704527399740,
+    "client_ip": "127.0.0.1",
+    "response": {
+        "status": 200,
+        "size": 136,
+        "headers": {
+            "server": "APISIX/3.7.0",
+            "content-type": "text/plain",
+            "transfer-encoding": "chunked",
+            "connection": "close"
+        }
+    },
+    "upstream": "127.0.0.1:1982",
+    "route_id": "1",
+    "upstream_latency": 12,
+    "latency": 111.99998855591
+}
+```
+
 ## 插件元数据

 | 名称 | 类型 | 必选项 | 默认值 | 有效值 | 描述 |

From c4521d0994932128ad787021ba7dddbe7f2747bf Mon Sep 17 00:00:00 2001
From: Derobukal
Date: Thu, 11 Jan 2024 06:16:12 +0800
Subject: [PATCH 03/20] docs: fix certificate doc request url error (#10790)

---
 docs/en/latest/certificate.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/en/latest/certificate.md b/docs/en/latest/certificate.md
index 146a153fbe25..1135506378d3 100644
--- a/docs/en/latest/certificate.md
+++ b/docs/en/latest/certificate.md
@@ -65,7 +65,7 @@ curl http://127.0.0.1:9180/apisix/admin/routes/1 -H 'X-API-KEY: edd1c9f034335f13
 Send a request to verify:

 ```shell
-curl --resolve 'test.com:9443:127.0.0.1' https://test.com:9443/hello -k -vvv
+curl --resolve 'test.com:9443:127.0.0.1' https://test.com:9443/get -k -vvv

 * Added test.com:9443:127.0.0.1 to DNS cache
 * About to connect() to test.com port 9443 (#0)

From 4f9b59aadb89931cf8c1ca304dba606e1bdc02ce Mon Sep 17 00:00:00 2001
From: baiyun <337531158@qq.com>
Date: Thu, 11 Jan 2024 06:19:03 +0800
Subject: [PATCH 04/20] docs: use shell instead of
python to configure ssls resources (#10773)

---
 docs/en/latest/mtls.md | 145 +++++++++++++++++++---------------
 docs/zh/latest/mtls.md | 146 +++++++++++++++++++----------------------
 2 files changed, 136 insertions(+), 155 deletions(-)

diff --git a/docs/en/latest/mtls.md b/docs/en/latest/mtls.md
index 25a1747308e5..02e5e4b4550e 100644
--- a/docs/en/latest/mtls.md
+++ b/docs/en/latest/mtls.md
@@ -108,52 +108,69 @@ We provide a [tutorial](./tutorials/client-to-apisix-mtls.md) that explains in d

 When configuring `ssl`, use parameter `client.ca` and `client.depth` to configure the root CA that signing client certificates and the max length of certificate chain. Please refer to [Admin API](./admin-api.md#ssl) for details.

-Here is an example Python script to create SSL with mTLS (id is `1`, changes admin API url if needed):
-
-```python title="create-ssl.py"
-#!/usr/bin/env python
-# coding: utf-8
-import sys
-# sudo pip install requests
-import requests
-
-if len(sys.argv) < 4:
-    print("bad argument")
-    sys.exit(1)
-with open(sys.argv[1]) as f:
-    cert = f.read()
-with open(sys.argv[2]) as f:
-    key = f.read()
-sni = sys.argv[3]
-api_key = "edd1c9f034335f136f87ad84b625c8f1" # Change it
-
-reqParam = {
-    "cert": cert,
-    "key": key,
-    "snis": [sni],
-}
-if len(sys.argv) >= 5:
-    print("Setting mTLS")
-    reqParam["client"] = {}
-    with open(sys.argv[4]) as f:
-        clientCert = f.read()
-    reqParam["client"]["ca"] = clientCert
-    if len(sys.argv) >= 6:
-        reqParam["client"]["depth"] = int(sys.argv[5])
-resp = requests.put("http://127.0.0.1:9180/apisix/admin/ssls/1", json=reqParam, headers={
-    "X-API-KEY": api_key,
-})
-print(resp.status_code)
-print(resp.text)
+Here is an example shell script to create an SSL resource with mTLS (the id is `1`; change the admin API URL if needed):
+
+```shell
+curl http://127.0.0.1:9180/apisix/admin/ssls/1 \
+-H 'X-API-KEY: edd1c9f034335f136f87ad84b625c8f1' -X PUT -d '
+{
+    "cert": "'"$(cat t/certs/mtls_server.crt)"'",
+    "key": "'"$(cat t/certs/mtls_server.key)"'",
+    "snis": [
+        "admin.apisix.dev"
+    ],
+    "client": {
+        "ca": "'"$(cat t/certs/mtls_ca.crt)"'",
+        "depth": 10
+    }
+}'
 ```

-Create SSL:
+Send a request to verify:

 ```bash
-./create-ssl.py ./server.pem ./server.key 'mtls.test.com' ./client_ca.pem 10
-
-# test it
 curl --resolve 'mtls.test.com::' "https://:/hello" -k --cert ./client.pem --key ./client.key
+
+* Added admin.apisix.dev:9443:127.0.0.1 to DNS cache
+* Hostname admin.apisix.dev was found in DNS cache
+* Trying 127.0.0.1:9443...
+* Connected to admin.apisix.dev (127.0.0.1) port 9443 (#0) +* ALPN: offers h2 +* ALPN: offers http/1.1 +* CAfile: t/certs/mtls_ca.crt +* CApath: none +* [CONN-0-0][CF-SSL] (304) (OUT), TLS handshake, Client hello (1): +* [CONN-0-0][CF-SSL] (304) (IN), TLS handshake, Server hello (2): +* [CONN-0-0][CF-SSL] (304) (IN), TLS handshake, Unknown (8): +* [CONN-0-0][CF-SSL] (304) (IN), TLS handshake, Request CERT (13): +* [CONN-0-0][CF-SSL] (304) (IN), TLS handshake, Certificate (11): +* [CONN-0-0][CF-SSL] (304) (IN), TLS handshake, CERT verify (15): +* [CONN-0-0][CF-SSL] (304) (IN), TLS handshake, Finished (20): +* [CONN-0-0][CF-SSL] (304) (OUT), TLS handshake, Certificate (11): +* [CONN-0-0][CF-SSL] (304) (OUT), TLS handshake, CERT verify (15): +* [CONN-0-0][CF-SSL] (304) (OUT), TLS handshake, Finished (20): +* SSL connection using TLSv1.3 / AEAD-AES256-GCM-SHA384 +* ALPN: server accepted h2 +* Server certificate: +* subject: C=cn; ST=GuangDong; L=ZhuHai; CN=admin.apisix.dev; OU=ops +* start date: Dec 1 10:17:24 2022 GMT +* expire date: Aug 18 10:17:24 2042 GMT +* subjectAltName: host "admin.apisix.dev" matched cert's "admin.apisix.dev" +* issuer: C=cn; ST=GuangDong; L=ZhuHai; CN=ca.apisix.dev; OU=ops +* SSL certificate verify ok. +* Using HTTP2, server supports multiplexing +* Copying HTTP/2 data in stream buffer to connection buffer after upgrade: len=0 +* h2h3 [:method: GET] +* h2h3 [:path: /hello] +* h2h3 [:scheme: https] +* h2h3 [:authority: admin.apisix.dev:9443] +* h2h3 [user-agent: curl/7.87.0] +* h2h3 [accept: */*] +* Using Stream ID: 1 (easy handle 0x13000bc00) +> GET /hello HTTP/2 +> Host: admin.apisix.dev:9443 +> user-agent: curl/7.87.0 +> accept: */* ``` Please make sure that the SNI fits the certificate domain. @@ -170,41 +187,15 @@ When configuring `upstreams`, we could use parameter `tls.client_cert` and `tls. This feature requires APISIX to run on [APISIX-Runtime](./FAQ.md#how-do-i-build-the-apisix-runtime-environment). 
-Here is a similar Python script to patch a existed upstream with mTLS (changes admin API url if needed):
-
-```python title="patch_upstream_mtls.py"
-#!/usr/bin/env python
-# coding: utf-8
-import sys
-# sudo pip install requests
-import requests
-
-if len(sys.argv) < 4:
-    print("bad argument")
-    sys.exit(1)
-with open(sys.argv[2]) as f:
-    cert = f.read()
-with open(sys.argv[3]) as f:
-    key = f.read()
-id = sys.argv[1]
-api_key = "edd1c9f034335f136f87ad84b625c8f1" # Change it
-
-reqParam = {
-    "tls": {
-        "client_cert": cert,
-        "client_key": key,
-    },
-}
-
-resp = requests.patch("http://127.0.0.1:9180/apisix/admin/upstreams/"+id, json=reqParam, headers={
-    "X-API-KEY": api_key,
-})
-print(resp.status_code)
-print(resp.text)
-```
-
-Patch existed upstream with id `testmtls`:
+Here is a similar shell script to patch an existing upstream with mTLS (change the admin API URL if needed):

-```bash
-./patch_upstream_mtls.py testmtls ./client.pem ./client.key
+```shell
+curl http://127.0.0.1:9180/apisix/admin/upstreams/1 \
+-H 'X-API-KEY: edd1c9f034335f136f87ad84b625c8f1' -X PATCH -d '
+{
+    "tls": {
+        "client_cert": "'"$(cat t/certs/mtls_client.crt)"'",
+        "client_key": "'"$(cat t/certs/mtls_client.key)"'"
+    }
+}'
 ```
diff --git a/docs/zh/latest/mtls.md b/docs/zh/latest/mtls.md
index c96f48f2ad62..ad098c460d29 100644
--- a/docs/zh/latest/mtls.md
+++ b/docs/zh/latest/mtls.md
@@ -103,52 +103,68 @@ apisix:

 在配置 `ssl` 资源时,同时需要配置 `client.ca` 和 `client.depth` 参数,分别代表为客户端证书签名的 CA 列表,和证书链的最大深度。可参考:[SSL API 文档](./admin-api.md#ssl)。

-下面是一个可用于生成带双向认证配置的 SSL 资源的 Python 脚本示例。如果需要,可修改 API 地址、API Key 和 SSL 资源的 ID。
-
-```python title="create-ssl.py"
-#!/usr/bin/env python
-# coding: utf-8
-import sys
-# sudo pip install requests
-import requests
-
-if len(sys.argv) < 4:
-    print("bad argument")
-    sys.exit(1)
-with open(sys.argv[1]) as f:
-    cert = f.read()
-with open(sys.argv[2]) as f:
-    key = f.read()
-sni = sys.argv[3]
-api_key = "edd1c9f034335f136f87ad84b625c8f1" # Change it
-
-reqParam = {
-    "cert": cert,
-    "key": key,
-    "snis": [sni],
-}
-if len(sys.argv) >= 5:
-    print("Setting mTLS")
-    reqParam["client"] = {}
-    with open(sys.argv[4]) as f:
-        clientCert = f.read()
-    reqParam["client"]["ca"] = clientCert
-    if len(sys.argv) >= 6:
-        reqParam["client"]["depth"] = int(sys.argv[5])
-resp = requests.put("http://127.0.0.1:9180/apisix/admin/ssls/1", json=reqParam, headers={
-    "X-API-KEY": api_key,
-})
-print(resp.status_code)
-print(resp.text)
+下面是一个可用于生成带双向认证配置的 SSL 资源的 shell 脚本示例(如果需要,可修改 API 地址、API Key 和 SSL 资源的 ID):
+
+```shell
+curl http://127.0.0.1:9180/apisix/admin/ssls/1 \
+-H 'X-API-KEY: edd1c9f034335f136f87ad84b625c8f1' -X PUT -d '
+{
+    "cert": "'"$(cat t/certs/mtls_server.crt)"'",
+    "key": "'"$(cat t/certs/mtls_server.key)"'",
+    "snis": [
+        "admin.apisix.dev"
+    ],
+    "client": {
+        "ca": "'"$(cat t/certs/mtls_ca.crt)"'",
+        "depth": 10
+    }
+}'
 ```

-使用上述 Python 脚本创建 SSL 资源:
+测试:

 ```bash
-./create-ssl.py ./server.pem ./server.key 'mtls.test.com' ./client_ca.pem 10
-
-# 测试
-curl --resolve 'mtls.test.com::' "https://:/hello" -k --cert ./client.pem --key ./client.key
+curl -vvv --resolve 'admin.apisix.dev:9443:127.0.0.1' https://admin.apisix.dev:9443/hello --cert t/certs/mtls_client.crt --key t/certs/mtls_client.key --cacert t/certs/mtls_ca.crt
+
+* Added admin.apisix.dev:9443:127.0.0.1 to DNS cache
+* Hostname admin.apisix.dev was found in DNS cache
+* Trying 127.0.0.1:9443...
+* Connected to admin.apisix.dev (127.0.0.1) port 9443 (#0) +* ALPN: offers h2 +* ALPN: offers http/1.1 +* CAfile: t/certs/mtls_ca.crt +* CApath: none +* [CONN-0-0][CF-SSL] (304) (OUT), TLS handshake, Client hello (1): +* [CONN-0-0][CF-SSL] (304) (IN), TLS handshake, Server hello (2): +* [CONN-0-0][CF-SSL] (304) (IN), TLS handshake, Unknown (8): +* [CONN-0-0][CF-SSL] (304) (IN), TLS handshake, Request CERT (13): +* [CONN-0-0][CF-SSL] (304) (IN), TLS handshake, Certificate (11): +* [CONN-0-0][CF-SSL] (304) (IN), TLS handshake, CERT verify (15): +* [CONN-0-0][CF-SSL] (304) (IN), TLS handshake, Finished (20): +* [CONN-0-0][CF-SSL] (304) (OUT), TLS handshake, Certificate (11): +* [CONN-0-0][CF-SSL] (304) (OUT), TLS handshake, CERT verify (15): +* [CONN-0-0][CF-SSL] (304) (OUT), TLS handshake, Finished (20): +* SSL connection using TLSv1.3 / AEAD-AES256-GCM-SHA384 +* ALPN: server accepted h2 +* Server certificate: +* subject: C=cn; ST=GuangDong; L=ZhuHai; CN=admin.apisix.dev; OU=ops +* start date: Dec 1 10:17:24 2022 GMT +* expire date: Aug 18 10:17:24 2042 GMT +* subjectAltName: host "admin.apisix.dev" matched cert's "admin.apisix.dev" +* issuer: C=cn; ST=GuangDong; L=ZhuHai; CN=ca.apisix.dev; OU=ops +* SSL certificate verify ok. +* Using HTTP2, server supports multiplexing +* Copying HTTP/2 data in stream buffer to connection buffer after upgrade: len=0 +* h2h3 [:method: GET] +* h2h3 [:path: /hello] +* h2h3 [:scheme: https] +* h2h3 [:authority: admin.apisix.dev:9443] +* h2h3 [user-agent: curl/7.87.0] +* h2h3 [accept: */*] +* Using Stream ID: 1 (easy handle 0x13000bc00) +> GET /hello HTTP/2 +> Host: admin.apisix.dev:9443 +> user-agent: curl/7.87.0 +> accept: */* ``` 注意,测试时使用的域名需要符合证书的参数。 @@ -165,41 +181,15 @@ curl --resolve 'mtls.test.com::' "https:// Date: Thu, 11 Jan 2024 10:38:57 +0800 Subject: [PATCH 05/20] ci: remove unnecessary ci file (#10792) --- .github/workflows/cli-master.yml | 53 -------------- ci/common.sh | 1 - ci/linux_apisix_master_luarocks_runner.sh | 84 ----------------------- 3 files changed, 138 deletions(-) delete mode 100644 .github/workflows/cli-master.yml delete mode 100755 ci/linux_apisix_master_luarocks_runner.sh diff --git a/.github/workflows/cli-master.yml b/.github/workflows/cli-master.yml deleted file mode 100644 index b65141ed6ce7..000000000000 --- a/.github/workflows/cli-master.yml +++ /dev/null @@ -1,53 +0,0 @@ -name: CLI Test (master) - -on: - push: - branches: [master] - paths-ignore: - - 'docs/**' - - '**/*.md' - -concurrency: - group: ${{ github.workflow }}-${{ github.ref == 'refs/heads/master' && github.run_number || github.ref }} - cancel-in-progress: true - -permissions: - contents: read - -jobs: - build: - strategy: - fail-fast: false - matrix: - job_name: - - linux_apisix_master_luarocks - runs-on: ubuntu-20.04 - timeout-minutes: 15 - env: - OPENRESTY_VERSION: default - - steps: - - name: Check out code - uses: actions/checkout@v4 - with: - submodules: recursive - - - name: Cache deps - uses: actions/cache@v3 - env: - cache-name: cache-deps - with: - path: deps - key: ${{ runner.os }}-${{ env.cache-name }}-${{ matrix.job_name }}-${{ hashFiles('apisix-master-0.rockspec') }} - - - name: Linux launch common services - run: | - project_compose_ci=ci/pod/docker-compose.common.yml make ci-env-up - - - name: Linux Install - run: | - sudo --preserve-env=OPENRESTY_VERSION \ - ./ci/${{ matrix.job_name }}_runner.sh do_install - - - name: Linux Script - run: sudo ./ci/${{ matrix.job_name }}_runner.sh script diff --git a/ci/common.sh b/ci/common.sh index 
7609bcfd02d7..9aa132af1c06 100644 --- a/ci/common.sh +++ b/ci/common.sh @@ -23,7 +23,6 @@ export_version_info() { export_or_prefix() { export OPENRESTY_PREFIX="/usr/local/openresty" - export APISIX_MAIN="https://raw.githubusercontent.com/apache/apisix/master/apisix-master-0.rockspec" export PATH=$OPENRESTY_PREFIX/nginx/sbin:$OPENRESTY_PREFIX/luajit/bin:$OPENRESTY_PREFIX/bin:$PATH export OPENSSL_PREFIX=$OPENRESTY_PREFIX/openssl3 export OPENSSL_BIN=$OPENSSL_PREFIX/bin/openssl diff --git a/ci/linux_apisix_master_luarocks_runner.sh b/ci/linux_apisix_master_luarocks_runner.sh deleted file mode 100755 index 4137f4399e3b..000000000000 --- a/ci/linux_apisix_master_luarocks_runner.sh +++ /dev/null @@ -1,84 +0,0 @@ -#!/usr/bin/env bash -# -# Licensed to the Apache Software Foundation (ASF) under one or more -# contributor license agreements. See the NOTICE file distributed with -# this work for additional information regarding copyright ownership. -# The ASF licenses this file to You under the Apache License, Version 2.0 -# (the "License"); you may not use this file except in compliance with -# the License. You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -# - -. ./ci/common.sh - -do_install() { - linux_get_dependencies - install_brotli - - export_or_prefix - - ./ci/linux-install-openresty.sh - ./utils/linux-install-luarocks.sh - ./ci/linux-install-etcd-client.sh -} - -script() { - export_or_prefix - openresty -V - - sudo rm -rf /usr/local/apisix - - # run the test case in an empty folder - mkdir tmp && cd tmp - cp -r ../utils ./ - - # install APISIX by luarocks - luarocks install $APISIX_MAIN > build.log 2>&1 || (cat build.log && exit 1) - cp ../bin/apisix /usr/local/bin/apisix - - # show install files - luarocks show apisix - - sudo PATH=$PATH apisix help - sudo PATH=$PATH apisix init - sudo PATH=$PATH apisix start - sudo PATH=$PATH apisix quit - for i in {1..10} - do - if [ ! 
-f /usr/local/apisix/logs/nginx.pid ];then - break - fi - sleep 0.3 - done - sudo PATH=$PATH apisix start - sudo PATH=$PATH apisix stop - - # apisix cli test - # todo: need a more stable way - - grep '\[error\]' /usr/local/apisix/logs/error.log > /tmp/error.log | true - if [ -s /tmp/error.log ]; then - echo "=====found error log=====" - cat /usr/local/apisix/logs/error.log - exit 1 - fi -} - -case_opt=$1 -shift - -case ${case_opt} in -do_install) - do_install "$@" - ;; -script) - script "$@" - ;; -esac From 21599ac41305843ca1e8abe8b6ea413a6b87ce1f Mon Sep 17 00:00:00 2001 From: Liu Wei Date: Thu, 11 Jan 2024 14:41:51 +0800 Subject: [PATCH 06/20] ci: fix start dubbo error (#10800) --- .github/workflows/build.yml | 1 + .github/workflows/centos7-ci.yml | 1 + .github/workflows/gm-cron.yaml | 1 + .github/workflows/redhat-ci.yaml | 1 + 4 files changed, 4 insertions(+) diff --git a/.github/workflows/build.yml b/.github/workflows/build.yml index 2c1bbcc9cd71..677a709ebbd2 100644 --- a/.github/workflows/build.yml +++ b/.github/workflows/build.yml @@ -141,6 +141,7 @@ jobs: - name: Start Dubbo Backend if: matrix.os_name == 'linux_openresty' && (steps.test_env.outputs.type == 'plugin' || steps.test_env.outputs.type == 'last') run: | + sudo apt update sudo apt install -y maven cd t/lib/dubbo-backend mvn package diff --git a/.github/workflows/centos7-ci.yml b/.github/workflows/centos7-ci.yml index 9f98a363f5f3..b4adda54759c 100644 --- a/.github/workflows/centos7-ci.yml +++ b/.github/workflows/centos7-ci.yml @@ -99,6 +99,7 @@ jobs: - name: Start Dubbo Backend run: | + sudo apt update sudo apt install -y maven cd t/lib/dubbo-backend mvn package diff --git a/.github/workflows/gm-cron.yaml b/.github/workflows/gm-cron.yaml index b90abaf14b8c..67c3f65b74a0 100644 --- a/.github/workflows/gm-cron.yaml +++ b/.github/workflows/gm-cron.yaml @@ -124,6 +124,7 @@ jobs: - name: Start Dubbo Backend if: steps.test_env.outputs.type == 'plugin' run: | + sudo apt update sudo apt install -y maven cd t/lib/dubbo-backend mvn package diff --git a/.github/workflows/redhat-ci.yaml b/.github/workflows/redhat-ci.yaml index 547bfb1f14dc..35f608b078b9 100644 --- a/.github/workflows/redhat-ci.yaml +++ b/.github/workflows/redhat-ci.yaml @@ -95,6 +95,7 @@ jobs: - name: Start Dubbo Backend run: | + sudo apt update sudo apt install -y maven cd t/lib/dubbo-backend mvn package From 52c368d1de02a591521049180ffb93f626ff65fb Mon Sep 17 00:00:00 2001 From: baiyun <337531158@qq.com> Date: Fri, 12 Jan 2024 10:42:49 +0800 Subject: [PATCH 07/20] docs: add Chinese translation for the new jwe-decrypt plugin doc (#10809) * docs: add Chinese translation for the new jwe-decrypt plugin doc --- docs/zh/latest/config.json | 1 + docs/zh/latest/plugins/jwe-decrypt.md | 183 ++++++++++++++++++++++++++ 2 files changed, 184 insertions(+) create mode 100644 docs/zh/latest/plugins/jwe-decrypt.md diff --git a/docs/zh/latest/config.json b/docs/zh/latest/config.json index 2f8c3852527e..07e9d30150d9 100644 --- a/docs/zh/latest/config.json +++ b/docs/zh/latest/config.json @@ -91,6 +91,7 @@ "plugins/wolf-rbac", "plugins/key-auth", "plugins/jwt-auth", + "plugins/jwe-decrypt", "plugins/basic-auth", "plugins/openid-connect", "plugins/hmac-auth", diff --git a/docs/zh/latest/plugins/jwe-decrypt.md b/docs/zh/latest/plugins/jwe-decrypt.md new file mode 100644 index 000000000000..38f39158b018 --- /dev/null +++ b/docs/zh/latest/plugins/jwe-decrypt.md @@ -0,0 +1,183 @@ +--- +title: jwe-decrypt +keywords: + - Apache APISIX + - API 网关 + - APISIX 插件 + - JWE Decrypt + - jwe-decrypt 
+description: 本文档包含了 APISIX jwe-decrypt 插件的相关信息。
+---
+
+
+## 描述
+
+`jwe-decrypt` 插件,用于解密 APISIX [Service](../terminology/service.md) 或者 [Route](../terminology/route.md) 请求中的 [JWE](https://datatracker.ietf.org/doc/html/rfc7516) 授权请求头。
+
+插件增加了一个内部 API `/apisix/plugin/jwe/encrypt`,提供给 JWE 加密使用。解密时,密钥应配置在 [Consumer](../terminology/consumer.md) 内。
+
+## 属性
+
+Consumer 配置:
+
+| 名称 | 类型 | 必选项 | 默认值 | 有效值 | 描述 |
+|---------------|---------|-------|-------|-----|----------------------------------------------------------------------|
+| key | string | True | | | Consumer 的唯一 key |
+| secret | string | True | | | 解密密钥。密钥可以使用 [Secret](../terminology/secret.md) 资源保存在密钥管理服务中(最少 32 个字符) |
+| is_base64_encoded | boolean | False | false | | 如果密钥是 Base64 编码,则需要配置为 `true` |
+
+Route 配置:
+
+| 名称 | 类型 | 必选项 | 默认值 | 描述 |
+|----------------|---------|-------|---------------|----------------------------------------------------------------------------|
+| header | string | False | authorization | 指定请求头,用于获取加密令牌 |
+| forward_header | string | False | authorization | 传递给 Upstream 的请求头名称 |
+| strict | boolean | False | true | 如果配置为 `true`,请求中缺失 JWE token 则抛出 `403` 异常;如果配置为 `false`,在缺失 JWE token 的情况下不会抛出异常 |
+
+## 启用插件
+
+首先,基于 `jwe-decrypt` 插件创建一个 Consumer,并且配置解密密钥:
+
+```shell
+curl http://127.0.0.1:9180/apisix/admin/consumers -H 'X-API-KEY: edd1c9f034335f136f87ad84b625c8f1' -X PUT -d '
+{
+    "username": "jack",
+    "plugins": {
+        "jwe-decrypt": {
+            "key": "user-key",
+            "secret": "key-length-must-be-at-least-32-bytes"
+        }
+    }
+}'
+```
+
+下一步,基于 `jwe-decrypt` 插件创建一个路由,用于解密 `authorization` 请求头:
+
+```shell
+curl http://127.0.0.1:9180/apisix/admin/routes/1 -H 'X-API-KEY: edd1c9f034335f136f87ad84b625c8f1' -X PUT -d '
+{
+    "methods": ["GET"],
+    "uri": "/anything*",
+    "plugins": {
+        "jwe-decrypt": {}
+    },
+    "upstream": {
+        "type": "roundrobin",
+        "nodes": {
+            "httpbin.org:80": 1
+        }
+    }
+}'
+```
+
+### 使用 JWE 加密数据
+
+该插件创建了一个内部的 API `/apisix/plugin/jwe/encrypt` 以使用 JWE 进行加密。要公开它,需要创建一个对应的路由,并启用 [public-api](public-api.md) 插件:
+
+```shell
+curl http://127.0.0.1:9180/apisix/admin/routes/jwenew -H 'X-API-KEY: edd1c9f034335f136f87ad84b625c8f1' -X PUT -d '
+{
+    "uri": "/apisix/plugin/jwe/encrypt",
+    "plugins": {
+        "public-api": {}
+    }
+}'
+```
+
+向该 API 发送请求,将 Consumer 中配置的 `key` 以查询参数的方式传入,用于加密 `payload` 中的数据:
+
+```shell
+curl -G --data-urlencode 'payload={"uid":10000,"uname":"test"}' 'http://127.0.0.1:9080/apisix/plugin/jwe/encrypt?key=user-key' -i
+```
+
+您应该看到类似于如下内容的响应结果,其中 JWE 加密的数据位于响应体中:
+
+```
+HTTP/1.1 200 OK
+Date: Mon, 25 Sep 2023 02:38:16 GMT
+Content-Type: text/plain; charset=utf-8
+Transfer-Encoding: chunked
+Connection: keep-alive
+Server: APISIX/3.5.0
+Apisix-Plugins: public-api
+
+eyJhbGciOiJkaXIiLCJraWQiOiJ1c2VyLWtleSIsImVuYyI6IkEyNTZHQ00ifQ..MTIzNDU2Nzg5MDEy.hfzMJ0YfmbMcJ0ojgv4PYAHxPjlgMivmv35MiA.7nilnBt2dxLR_O6kf-HQUA
+```
+
+### 使用 JWE 解密数据
+
+将加密数据放在 `Authorization` 请求头中,向 API 发起请求:
+
+```shell
+curl http://127.0.0.1:9080/anything/hello -H 'Authorization: eyJhbGciOiJkaXIiLCJraWQiOiJ1c2VyLWtleSIsImVuYyI6IkEyNTZHQ00ifQ..MTIzNDU2Nzg5MDEy.hfzMJ0YfmbMcJ0ojgv4PYAHxPjlgMivmv35MiA.7nilnBt2dxLR_O6kf-HQUA' -i
+```
+
+您应该可以看到类似于如下的响应内容,其中 `Authorization` 请求头中显示了解密后的内容:
+
+```
+HTTP/1.1 200 OK
+Content-Type: application/json
+Content-Length: 452
+Connection: keep-alive
+Date: Mon, 25 Sep 2023 02:38:59 GMT
+Access-Control-Allow-Origin: *
+Access-Control-Allow-Credentials: true
+Server: APISIX/3.5.0
+Apisix-Plugins: jwe-decrypt
+
+{
+  "args": {},
+  "data": "",
+  "files": {},
+  "form": {},
+  "headers": {
+    "Accept": "*/*",
+    "Authorization": "{\"uid\":10000,\"uname\":\"test\"}",
+    "Host": "127.0.0.1",
+    "User-Agent": "curl/8.1.2",
+    "X-Amzn-Trace-Id": "Root=1-6510f2c3-1586ec011a22b5094dbe1896",
+    "X-Forwarded-Host": "127.0.0.1"
+  },
+  "json": null,
+  "method": "GET",
+  "origin": "127.0.0.1, 119.143.79.94",
+  "url": "http://127.0.0.1/anything/hello"
+}
+```
+
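The encrypt and decrypt steps above can also be chained in a single script: fetch a JWE token from the internal encrypt API, then present it to the protected route. This is a minimal sketch, assuming the `user-key` Consumer, the `/anything*` route, and the public-api route configured above:

```shell
#!/usr/bin/env bash
# Sketch: obtain a JWE token from the internal encrypt API, then present it
# to the protected route. Assumes the Consumer/route setup shown above.

# 1. Encrypt a payload; the token is returned as the response body.
token=$(curl -s -G --data-urlencode 'payload={"uid":10000,"uname":"test"}' \
    'http://127.0.0.1:9080/apisix/plugin/jwe/encrypt?key=user-key')

# 2. Call the protected route; jwe-decrypt validates the token and forwards
#    the decrypted payload upstream in the Authorization header.
curl -i http://127.0.0.1:9080/anything/hello -H "Authorization: ${token}"
```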
"*/*", + "Authorization": "{\"uid\":10000,\"uname\":\"test\"}", + "Host": "127.0.0.1", + "User-Agent": "curl/8.1.2", + "X-Amzn-Trace-Id": "Root=1-6510f2c3-1586ec011a22b5094dbe1896", + "X-Forwarded-Host": "127.0.0.1" + }, + "json": null, + "method": "GET", + "origin": "127.0.0.1, 119.143.79.94", + "url": "http://127.0.0.1/anything/hello" +} +``` + +## 删除插件 + +要删除 `jwe-decrypt` 插件,您可以从插件配置中删除插件对应的 JSON 配置,APISIX 会自动加载,您不需要重新启动即可生效。 + +```shell +curl http://127.0.0.1:9180/apisix/admin/routes/1 -H 'X-API-KEY: edd1c9f034335f136f87ad84b625c8f1' -X PUT -d ' +{ + "methods": ["GET"], + "uri": "/anything*", + "plugins": {}, + "upstream": { + "type": "roundrobin", + "nodes": { + "httpbin.org:80": 1 + } + } +}' +``` From 686a0de14c005805a8e4e71b85769aa898c07fd7 Mon Sep 17 00:00:00 2001 From: baiyun <337531158@qq.com> Date: Fri, 12 Jan 2024 13:57:51 +0800 Subject: [PATCH 08/20] docs: allow to use environment variables for limit-count plugin settings (#10804) --- docs/zh/latest/plugins/limit-count.md | 31 +++++++++++++++++++++++++++ 1 file changed, 31 insertions(+) diff --git a/docs/zh/latest/plugins/limit-count.md b/docs/zh/latest/plugins/limit-count.md index dbacdfa00a20..79b3ac4fe680 100644 --- a/docs/zh/latest/plugins/limit-count.md +++ b/docs/zh/latest/plugins/limit-count.md @@ -250,6 +250,37 @@ curl -i http://127.0.0.1:9180/apisix/admin/routes/1 \ }' ``` +此外,插件中的属性值可以引用 APISIX 中的密钥。APISIX 当前支持两种存储密钥的方式 - [环境变量和 HashiCorp Vault](../terminology/secret.md)。 +如果您设置了环境变量 `REDIS_HOST` 和 `REDIS_PASSWORD` ,如下所示,您可以在插件配置中使用它们: + +```shell +curl -i http://127.0.0.1:9180/apisix/admin/routes/1 \ +-H 'X-API-KEY: edd1c9f034335f136f87ad84b625c8f1' -X PUT -d ' +{ + "uri": "/index.html", + "plugins": { + "limit-count": { + "count": 2, + "time_window": 60, + "rejected_code": 503, + "key": "remote_addr", + "policy": "redis", + "redis_host": "$ENV://REDIS_HOST", + "redis_port": 6379, + "redis_password": "$ENV://REDIS_PASSWORD", + "redis_database": 1, + "redis_timeout": 1001 + } + }, + "upstream": { + "type": "roundrobin", + "nodes": { + "127.0.0.1:1980": 1 + } + } +}' +``` + ## 测试插件 在上文提到的配置中,其限制了 60 秒内请求只能访问 2 次,可通过如下 `curl` 命令测试请求访问: From bfb9a98bacfb75524effed3fd23fe73d5a8a73f6 Mon Sep 17 00:00:00 2001 From: Warnar Boekkooi <88374436+boekkooi-lengoo@users.noreply.github.com> Date: Fri, 12 Jan 2024 07:08:23 +0100 Subject: [PATCH 09/20] fix: unnecessary YAML Config reloads (#9065) --- apisix/core/config_yaml.lua | 21 ++++++++++----------- t/cli/test_standalone.sh | 17 +++++++++++++++++ 2 files changed, 27 insertions(+), 11 deletions(-) diff --git a/apisix/core/config_yaml.lua b/apisix/core/config_yaml.lua index 8511eb583fa7..ce8c8321663a 100644 --- a/apisix/core/config_yaml.lua +++ b/apisix/core/config_yaml.lua @@ -63,7 +63,7 @@ local mt = { local apisix_yaml -local apisix_yaml_ctime +local apisix_yaml_mtime local function read_apisix_yaml(premature, pre_mtime) if premature then return @@ -74,9 +74,8 @@ local function read_apisix_yaml(premature, pre_mtime) return end - -- log.info("change: ", json.encode(attributes)) - local last_change_time = attributes.change - if apisix_yaml_ctime == last_change_time then + local last_modification_time = attributes.modification + if apisix_yaml_mtime == last_modification_time then return end @@ -114,7 +113,7 @@ local function read_apisix_yaml(premature, pre_mtime) end apisix_yaml = apisix_yaml_new - apisix_yaml_ctime = last_change_time + apisix_yaml_mtime = last_modification_time log.warn("config file ", apisix_yaml_path, " reloaded.") end @@ -124,12 +123,12 @@ local function 
From bfb9a98bacfb75524effed3fd23fe73d5a8a73f6 Mon Sep 17 00:00:00 2001
From: Warnar Boekkooi <88374436+boekkooi-lengoo@users.noreply.github.com>
Date: Fri, 12 Jan 2024 07:08:23 +0100
Subject: [PATCH 09/20] fix: unnecessary YAML Config reloads (#9065)

---
 apisix/core/config_yaml.lua | 21 ++++++++++-----------
 t/cli/test_standalone.sh    | 17 +++++++++++++++++
 2 files changed, 27 insertions(+), 11 deletions(-)

diff --git a/apisix/core/config_yaml.lua b/apisix/core/config_yaml.lua
index 8511eb583fa7..ce8c8321663a 100644
--- a/apisix/core/config_yaml.lua
+++ b/apisix/core/config_yaml.lua
@@ -63,7 +63,7 @@ local mt = {
 
 local apisix_yaml
-local apisix_yaml_ctime
+local apisix_yaml_mtime
 local function read_apisix_yaml(premature, pre_mtime)
     if premature then
         return
     end
@@ -74,9 +74,8 @@ local function read_apisix_yaml(premature, pre_mtime)
         return
     end
 
-    -- log.info("change: ", json.encode(attributes))
-    local last_change_time = attributes.change
-    if apisix_yaml_ctime == last_change_time then
+    local last_modification_time = attributes.modification
+    if apisix_yaml_mtime == last_modification_time then
         return
     end
 
@@ -114,7 +113,7 @@
     end
 
     apisix_yaml = apisix_yaml_new
-    apisix_yaml_ctime = last_change_time
+    apisix_yaml_mtime = last_modification_time
     log.warn("config file ", apisix_yaml_path, " reloaded.")
 end
 
@@ -124,12 +123,12 @@ local function sync_data(self)
         return nil, "missing 'key' arguments"
     end
 
-    if not apisix_yaml_ctime then
+    if not apisix_yaml_mtime then
         log.warn("wait for more time")
         return nil, "failed to read local file " .. apisix_yaml_path
     end
 
-    if self.conf_version == apisix_yaml_ctime then
+    if self.conf_version == apisix_yaml_mtime then
         return true
     end
 
@@ -138,7 +137,7 @@ local function sync_data(self)
 
     if not items then
         self.values = new_tab(8, 0)
         self.values_hash = new_tab(0, 8)
-        self.conf_version = apisix_yaml_ctime
+        self.conf_version = apisix_yaml_mtime
         return true
     end
@@ -155,7 +154,7 @@ local function sync_data(self)
         self.values_hash = new_tab(0, 1)
 
         local item = items
-        local conf_item = {value = item, modifiedIndex = apisix_yaml_ctime,
+        local conf_item = {value = item, modifiedIndex = apisix_yaml_mtime,
                            key = "/" .. self.key}
 
         local data_valid = true
@@ -202,7 +201,7 @@
             end
 
             local key = item.id or "arr_" .. i
-            local conf_item = {value = item, modifiedIndex = apisix_yaml_ctime,
+            local conf_item = {value = item, modifiedIndex = apisix_yaml_mtime,
                                key = "/" .. self.key .. "/" .. key}
 
             if data_valid and self.item_schema then
@@ -236,7 +235,7 @@
         end
     end
 
-    self.conf_version = apisix_yaml_ctime
+    self.conf_version = apisix_yaml_mtime
     return true
 end
 
diff --git a/t/cli/test_standalone.sh b/t/cli/test_standalone.sh
index a0d91c11c4a0..57b665294ce4 100755
--- a/t/cli/test_standalone.sh
+++ b/t/cli/test_standalone.sh
@@ -20,6 +20,7 @@
 . ./t/cli/common.sh
 
 standalone() {
+    rm -f conf/apisix.yaml.link
     clean_up
     git checkout conf/apisix.yaml
 }
@@ -138,3 +139,19 @@ if [ ! $code -eq 200 ]; then
 fi
 
 echo "passed: resolve variables in apisix.yaml conf success"
+
+# Avoid unnecessary config reloads
+## Wait for a second, otherwise `st_ctime` won't increase
+sleep 1
+expected_config_reloads=$(grep "config file $(pwd)/conf/apisix.yaml reloaded." logs/error.log | wc -l)
+
+## Create a hard link to change the link count and, as a result, `st_ctime`
+ln conf/apisix.yaml conf/apisix.yaml.link
+sleep 1
+
+actual_config_reloads=$(grep "config file $(pwd)/conf/apisix.yaml reloaded." logs/error.log | wc -l)
+if [ $expected_config_reloads -ne $actual_config_reloads ]; then
+    echo "failed: apisix.yaml was reloaded"
+    exit 1
+fi
+echo "passed: apisix.yaml was not reloaded"
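The switch from `attributes.change` (ctime) to `attributes.modification` (mtime) is what makes the hard-link case above safe: creating a hard link bumps a file's ctime but leaves its mtime untouched, while a real write updates both. A small, self-contained shell demonstration (assumes GNU coreutils `stat` on Linux; illustrative only):

```shell
# Illustrative demo (not part of the patch): compare how a hard link and a
# real write affect st_mtime vs st_ctime.
touch demo.yaml
stat -c 'mtime=%Y ctime=%Z' demo.yaml

sleep 1
ln demo.yaml demo.yaml.link             # metadata change: link count goes up
stat -c 'mtime=%Y ctime=%Z' demo.yaml   # ctime advanced, mtime unchanged

sleep 1
echo "routes: []" >> demo.yaml          # content change
stat -c 'mtime=%Y ctime=%Z' demo.yaml   # now mtime advanced as well
rm demo.yaml demo.yaml.link
```

Watching `attributes.modification` therefore ignores the metadata-only events that used to trigger spurious reloads.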
From fbedec8cd28d42d785bc17074dabb9b1d0ea025c Mon Sep 17 00:00:00 2001
From: baiyun <337531158@qq.com>
Date: Fri, 12 Jan 2024 15:10:14 +0800
Subject: [PATCH 10/20] docs: add Chinese translation for multi-auth plugin
 doc (#10812)

---
 docs/zh/latest/config.json           |   3 +-
 docs/zh/latest/plugins/multi-auth.md | 159 +++++++++++++++++++++++++++
 2 files changed, 161 insertions(+), 1 deletion(-)
 create mode 100644 docs/zh/latest/plugins/multi-auth.md

diff --git a/docs/zh/latest/config.json b/docs/zh/latest/config.json
index 07e9d30150d9..9ff718574b79 100644
--- a/docs/zh/latest/config.json
+++ b/docs/zh/latest/config.json
@@ -98,7 +98,8 @@
         "plugins/authz-casbin",
         "plugins/ldap-auth",
         "plugins/opa",
-        "plugins/forward-auth"
+        "plugins/forward-auth",
+        "plugins/multi-auth"
       ]
     },
     {
diff --git a/docs/zh/latest/plugins/multi-auth.md b/docs/zh/latest/plugins/multi-auth.md
new file mode 100644
index 000000000000..d06c2b2346f5
--- /dev/null
+++ b/docs/zh/latest/plugins/multi-auth.md
@@ -0,0 +1,159 @@
+---
+title: multi-auth
+keywords:
+  - Apache APISIX
+  - API 网关
+  - Plugin
+  - Multi Auth
+  - multi-auth
+description: 本文档包含有关 Apache APISIX multi-auth 插件的信息。
+---
+
+
+## 描述
+
+插件 `multi-auth` 用于向 `Route` 或者 `Service` 中添加多种身份验证方式。它支持 `auth` 类型的插件,您可以使用 `multi-auth` 插件来组合不同的身份认证方式。
+
+插件通过迭代 `auth_plugins` 属性指定的插件列表,提供了灵活的身份认证机制。它允许使用不同身份验证方式的多个 Consumer 共享同一个 `Route`。例如:一个 Consumer 使用 basic 认证,而另一个 Consumer 使用 JWT 认证。
+
+## 属性
+
+Route 配置:
+
+| 名称 | 类型 | 必选项 | 默认值 | 描述 |
+|--------------|-------|------|-----|-------------------------|
+| auth_plugins | array | True | - | 添加需要支持的认证插件。至少需要 2 个插件。 |
+
+## 启用插件
+
+要启用插件,您必须创建两个或多个具有不同身份验证插件配置的 Consumer:
+
+首先创建一个 Consumer 使用 basic-auth 插件:
+
+```shell
+curl http://127.0.0.1:9180/apisix/admin/consumers -H 'X-API-KEY: edd1c9f034335f136f87ad84b625c8f1' -X PUT -d '
+{
+    "username": "foo1",
+    "plugins": {
+        "basic-auth": {
+            "username": "foo1",
+            "password": "bar1"
+        }
+    }
+}'
+```
+
+然后再创建一个 Consumer 使用 key-auth 插件:
+
+```shell
+curl http://127.0.0.1:9180/apisix/admin/consumers -H 'X-API-KEY: edd1c9f034335f136f87ad84b625c8f1' -X PUT -d '
+{
+    "username": "foo2",
+    "plugins": {
+        "key-auth": {
+            "key": "auth-one"
+        }
+    }
+}'
+```
+
+您也可以使用 [APISIX Dashboard](/docs/dashboard/USER_GUIDE) 通过 web UI 来完成操作。
+
+创建 Consumer 之后,您可以配置一个路由或服务来验证请求:
+
+```shell
+curl http://127.0.0.1:9180/apisix/admin/routes/1 -H 'X-API-KEY: edd1c9f034335f136f87ad84b625c8f1' -X PUT -d '
+{
+    "methods": ["GET"],
+    "uri": "/hello",
+    "plugins": {
+        "multi-auth": {
+            "auth_plugins": [
+                {
+                    "basic-auth": {}
+                },
+                {
+                    "key-auth": {
+                        "query": "apikey",
+                        "hide_credentials": true,
+                        "header": "apikey"
+                    }
+                }
+            ]
+        }
+    },
+    "upstream": {
+        "type": "roundrobin",
+        "nodes": {
+            "127.0.0.1:1980": 1
+        }
+    }
+}'
+```
+
+## 使用示例
+
+如上所述配置插件后,您可以向对应的 API 发起请求,如下所示:
+
+请求开启 basic-auth 插件的 API:
+
+```shell
+curl -i -ufoo1:bar1 http://127.0.0.1:9080/hello
+```
+
+请求开启 key-auth 插件的 API:
+
+```shell
+curl http://127.0.0.1:9080/hello -H 'apikey: auth-one' -i
+```
+
+```
+HTTP/1.1 200 OK
+...
+hello, world
+```
+
+如果请求未授权,将会返回如下错误:
+
+```shell
+HTTP/1.1 401 Unauthorized
+...
+{"message":"Authorization Failed"}
+```
+
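The two success cases and the failure case above can be exercised together as a quick smoke test. A minimal sketch, assuming the `foo1`/`foo2` Consumers and the `/hello` route configured above:

```shell
#!/usr/bin/env bash
# Hit the same route once per configured auth scheme.

# Consumer foo1 via HTTP basic auth -> expect 200
curl -s -o /dev/null -w "basic-auth: %{http_code}\n" \
    -u foo1:bar1 http://127.0.0.1:9080/hello

# Consumer foo2 via API key -> expect 200
curl -s -o /dev/null -w "key-auth:   %{http_code}\n" \
    -H 'apikey: auth-one' http://127.0.0.1:9080/hello

# No credentials: every plugin in auth_plugins rejects -> expect 401
curl -s -o /dev/null -w "no-auth:    %{http_code}\n" \
    http://127.0.0.1:9080/hello
```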
+{"message":"Authorization Failed"} +``` + +## 删除插件 + +要删除 `multi-auth` 插件,您可以从插件配置中删除插件对应的 JSON 配置,APISIX 会自动加载,您不需要重新启动即可生效。 + +```shell +curl http://127.0.0.1:9180/apisix/admin/routes/1 -H 'X-API-KEY: edd1c9f034335f136f87ad84b625c8f1' -X PUT -d ' +{ + "methods": ["GET"], + "uri": "/hello", + "plugins": {}, + "upstream": { + "type": "roundrobin", + "nodes": { + "127.0.0.1:1980": 1 + } + } +}' +``` From c11782a607db58298bc2f848fd3ff0ae63fb7e09 Mon Sep 17 00:00:00 2001 From: Warnar Boekkooi <88374436+boekkooi-lengoo@users.noreply.github.com> Date: Fri, 12 Jan 2024 08:54:53 +0100 Subject: [PATCH 11/20] feat: support `uri_arg_` when use `radixtree_uri_with_parameter` (#10645) --- apisix/core/ctx.lua | 8 ++ t/core/ctx_with_params.t | 153 +++++++++++++++++++++++++++++++++++++++ 2 files changed, 161 insertions(+) create mode 100644 t/core/ctx_with_params.t diff --git a/apisix/core/ctx.lua b/apisix/core/ctx.lua index 6d77b43811ca..36b8788bd02d 100644 --- a/apisix/core/ctx.lua +++ b/apisix/core/ctx.lua @@ -274,6 +274,14 @@ do end end + elseif core_str.has_prefix(key, "uri_param_") then + -- `uri_param_` provides access to the uri parameters when using + -- radixtree_uri_with_parameter + if t._ctx.curr_req_matched then + local arg_key = sub_str(key, 11) + val = t._ctx.curr_req_matched[arg_key] + end + elseif core_str.has_prefix(key, "http_") then key = key:lower() key = re_gsub(key, "-", "_", "jo") diff --git a/t/core/ctx_with_params.t b/t/core/ctx_with_params.t new file mode 100644 index 000000000000..6bff7dc3adcd --- /dev/null +++ b/t/core/ctx_with_params.t @@ -0,0 +1,153 @@ +# +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. +# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+# +use t::APISIX 'no_plan'; + +repeat_each(1); +no_long_string(); +no_root_location(); +no_shuffle(); + +our $yaml_config = <<_EOC_; +apisix: + router: + http: 'radixtree_uri_with_parameter' +_EOC_ + +add_block_preprocessor(sub { + my ($block) = @_; + + if (!$block->yaml_config) { + $block->set_value("yaml_config", $yaml_config); + } +}); + +run_tests; + +__DATA__ + +=== TEST 1: add route and get `uri_param_` +--- config + location /t { + content_by_lua_block { + local t = require("lib.test_admin").test + local code, body = t('/apisix/admin/routes/1', + ngx.HTTP_PUT, + [[{ + "methods": ["GET"], + "plugins": { + "serverless-pre-function": { + "phase": "access", + "functions" : ["return function() ngx.log(ngx.INFO, \"uri_param_id: \", ngx.ctx.api_ctx.var.uri_param_id) end"] + } + }, + "upstream": { + "type": "roundrobin", + "nodes": { + "127.0.0.1:1980": 1 + } + }, + "uri": "/:id" + }]] + ) + + if code >= 300 then + ngx.status = code + end + ngx.say(body) + } + } +--- request +GET /t +--- response_body +passed + + + +=== TEST 2: `uri_param_id` exist (hello) +--- request +GET /hello +--- response_body +hello world +--- error_log +uri_param_id: hello + + + +=== TEST 3: `uri_param_id` exist (hello1) +--- request +GET /hello1 +--- response_body +hello1 world +--- error_log +uri_param_id: hello1 + + + +=== TEST 4: `uri_param_id` nonexisting route +--- request +GET /not_a_route +--- error_code: 404 +--- error_log +uri_param_id: not_a_route + + + +=== TEST 5: add route and get unknown `uri_param_id` +--- config + location /t { + content_by_lua_block { + local t = require("lib.test_admin").test + local code, body = t('/apisix/admin/routes/1', + ngx.HTTP_PUT, + [[{ + "methods": ["GET"], + "plugins": { + "serverless-pre-function": { + "phase": "access", + "functions" : ["return function() ngx.log(ngx.INFO, \"uri_param_id: \", ngx.ctx.api_ctx.var.uri_param_id) end"] + } + }, + "upstream": { + "type": "roundrobin", + "nodes": { + "127.0.0.1:1980": 1 + } + }, + "uri": "/hello" + }]] + ) + + if code >= 300 then + ngx.status = code + end + ngx.say(body) + } + } +--- request +GET /t +--- response_body +passed + + + +=== TEST 6: `uri_param_id` not in uri +--- request +GET /hello +--- response_body +hello world +--- error_log +uri_param_id: From 781e8f66409e838ff899662e5b44ddbf54d0fd7f Mon Sep 17 00:00:00 2001 From: Silent Date: Fri, 12 Jan 2024 13:27:34 +0530 Subject: [PATCH 12/20] fix(brotli-plugin): skip brotli compression for upstream compressed response (#10740) --- apisix/plugins/brotli.lua | 6 +++ docs/en/latest/plugins/brotli.md | 6 +++ t/lib/server.lua | 7 ++++ t/plugin/brotli.t | 65 ++++++++++++++++++++++++++++++++ 4 files changed, 84 insertions(+) diff --git a/apisix/plugins/brotli.lua b/apisix/plugins/brotli.lua index 9b4954aea6ac..4482fc0cd8dc 100644 --- a/apisix/plugins/brotli.lua +++ b/apisix/plugins/brotli.lua @@ -163,6 +163,12 @@ function _M.header_filter(conf, ctx) return end + local content_encoded = ngx_header["Content-Encoding"] + if content_encoded then + -- Don't compress if Content-Encoding is present in upstream data + return + end + local types = conf.types local content_type = ngx_header["Content-Type"] if not content_type then diff --git a/docs/en/latest/plugins/brotli.md b/docs/en/latest/plugins/brotli.md index eaf9cb2999dc..196fdb520234 100644 --- a/docs/en/latest/plugins/brotli.md +++ b/docs/en/latest/plugins/brotli.md @@ -47,6 +47,12 @@ sudo sh -c "echo /usr/local/brotli/lib >> /etc/ld.so.conf.d/brotli.conf" sudo ldconfig ``` +:::caution + +If the upstream is returning a 
compressed response, then the Brotli plugin won't be able to compress it. + +::: + ## Attributes | Name | Type | Required | Default | Valid values | Description | diff --git a/t/lib/server.lua b/t/lib/server.lua index c7386e1b7e84..7cc8101a3af7 100644 --- a/t/lib/server.lua +++ b/t/lib/server.lua @@ -591,4 +591,11 @@ function _M.clickhouse_logger_server() end +function _M.mock_compressed_upstream_response() + local s = "compressed_response" + ngx.header['Content-Encoding'] = 'gzip' + ngx.say(s) +end + + return _M diff --git a/t/plugin/brotli.t b/t/plugin/brotli.t index 5f7c6cae39a2..f0f69315430f 100644 --- a/t/plugin/brotli.t +++ b/t/plugin/brotli.t @@ -718,3 +718,68 @@ passed } --- response_body ok + + + +=== TEST 30: mock upstream compressed response +--- config + location /t { + content_by_lua_block { + local t = require("lib.test_admin").test + local code, body = t('/apisix/admin/routes/1', + ngx.HTTP_PUT, + [[{ + "uri": "/mock_compressed_upstream_response", + "upstream": { + "type": "roundrobin", + "nodes": { + "127.0.0.1:1980": 1 + } + }, + "plugins": { + "brotli": { + "types": "*" + } + } + }]] + ) + + if code >= 300 then + ngx.status = code + end + ngx.say(body) + } +} +--- response_body +passed + + + +=== TEST 31: hit - skip brotli compression of compressed upstream response +--- config + location /t { + content_by_lua_block { + local http = require "resty.http" + local uri = "http://127.0.0.1:" .. ngx.var.server_port + .. "/mock_compressed_upstream_response" + local httpc = http.new() + local req_body = ("abcdf01234"):rep(1024) + local res, err = httpc:request_uri(uri, + {method = "POST", headers = {["Accept-Encoding"] = "gzip, br"}, body = req_body}) + if not res then + ngx.say(err) + return + end + if res.headers["Content-Encoding"] == 'gzip' then + ngx.say("ok") + end + } + } +--- request +GET /t +--- more_headers +Accept-Encoding: gzip, br +Vary: upstream +Content-Type: text/html +--- response_body +ok From 2bc2cfc5380d94d8f1684de3ee8fa0f0cf86a6c5 Mon Sep 17 00:00:00 2001 From: baiyun <337531158@qq.com> Date: Mon, 15 Jan 2024 10:37:31 +0800 Subject: [PATCH 13/20] docs: add Chinese translation for the new brotil plugin (#10814) --- docs/zh/latest/config.json | 1 + docs/zh/latest/plugins/brotli.md | 123 +++++++++++++++++++++++++++++++ 2 files changed, 124 insertions(+) create mode 100644 docs/zh/latest/plugins/brotli.md diff --git a/docs/zh/latest/config.json b/docs/zh/latest/config.json index 9ff718574b79..a330bcbe6601 100644 --- a/docs/zh/latest/config.json +++ b/docs/zh/latest/config.json @@ -62,6 +62,7 @@ "plugins/redirect", "plugins/echo", "plugins/gzip", + "plugins/brotli", "plugins/real-ip", "plugins/server-info", "plugins/ext-plugin-pre-req", diff --git a/docs/zh/latest/plugins/brotli.md b/docs/zh/latest/plugins/brotli.md new file mode 100644 index 000000000000..95f85a4e57f4 --- /dev/null +++ b/docs/zh/latest/plugins/brotli.md @@ -0,0 +1,123 @@ +--- +title: brotli +keywords: + - Apache APISIX + - API 网关 + - Plugin + - brotli +description: 这个文档包含有关 Apache APISIX brotli 插件的相关信息。 +--- + + + +## 描述 + +`brotli` 插件可以动态的设置 Nginx 中的 [brotli](https://github.com/google/ngx_brotli) 的行为。 + +## 前提条件 + +该插件依赖 brotli 共享库。 + +如下是构建和安装 brotli 共享库的示例脚本: + +``` shell +wget https://github.com/google/brotli/archive/refs/tags/v1.1.0.zip +unzip v1.1.0.zip +cd brotli-1.1.0 && mkdir build && cd build +cmake -DCMAKE_BUILD_TYPE=Release -DCMAKE_INSTALL_PREFIX=/usr/local/brotli .. +sudo cmake --build . 
--config Release --target install +sudo sh -c "echo /usr/local/brotli/lib >> /etc/ld.so.conf.d/brotli.conf" +sudo ldconfig +``` + +## 属性 + +| 名称 | 类型 | 必选项 | 默认值 | 有效值 | 描述 | +|--------------|----------------------|-------|---------------|--------------|--------------------------------------------------------------------------------------------------------------------------------------------| +| types | array[string] or "*" | False | ["text/html"] | | 动态设置 `brotli_types` 指令。特殊值 `"*"` 用于匹配任意的 MIME 类型。 | +| min_length | integer | False | 20 | >= 1 | 动态设置 `brotli_min_length` 指令。 | +| comp_level | integer | False | 6 | [0, 11] | 动态设置 `brotli_comp_level` 指令。 | +| mode | integer | False | 0 | [0, 2] | 动态设置 `brotli decompress mode`,更多信息参考 [RFC 7932](https://tools.ietf.org/html/rfc7932)。 | +| lgwin | integer | False | 19 | [0, 10-24] | 动态设置 `brotli sliding window size`,`lgwin` 是滑动窗口大小的以 2 为底的对数,将其设置为 0 会让压缩器自行决定最佳值,更多信息请参考 [RFC 7932](https://tools.ietf.org/html/rfc7932)。 | +| lgblock | integer | False | 0 | [0, 16-24] | 动态设置 `brotli input block size`,`lgblock` 是最大输入块大小的以 2 为底的对数,将其设置为 0 会让压缩器自行决定最佳值,更多信息请参考 [RFC 7932](https://tools.ietf.org/html/rfc7932)。 | +| http_version | number | False | 1.1 | 1.1, 1.0 | 与 `gzip_http_version` 指令类似,用于识别 http 的协议版本。 | +| vary | boolean | False | false | | 与 `gzip_vary` 指令类似,用于启用或禁用 `Vary: Accept-Encoding` 响应头。 | + +## 启用插件 + +如下示例中,在指定的路由上启用 `brotli` 插件: + +```shell +curl -i http://127.0.0.1:9180/apisix/admin/routes/1 -H 'X-API-KEY: edd1c9f034335f136f87ad84b625c8f1' -X PUT -d ' +{ + "uri": "/", + "plugins": { + "brotli": { + } + }, + "upstream": { + "type": "roundrobin", + "nodes": { + "httpbin.org": 1 + } + } +}' +``` + +## 使用示例 + +通过上述命令启用插件后,可以通过以下方法测试插件: + +```shell +curl http://127.0.0.1:9080/ -i -H "Accept-Encoding: br" +``` + +``` +HTTP/1.1 200 OK +Content-Type: text/html; charset=utf-8 +Transfer-Encoding: chunked +Connection: keep-alive +Date: Tue, 05 Dec 2023 03:06:49 GMT +Access-Control-Allow-Origin: * +Access-Control-Allow-Credentials: true +Server: APISIX/3.6.0 +Content-Encoding: br + +Warning: Binary output can mess up your terminal. Use "--output -" to tell +Warning: curl to output it to your terminal anyway, or consider "--output +Warning: " to save to a file. +``` + +## 删除插件 + +当您需要禁用 `brotli` 插件时,可以通过以下命令删除相应的 JSON 配置,APISIX 将会自动重新加载相关配置,无需重启服务: + +```shell +curl http://127.0.0.1:9180/apisix/admin/routes/1 -H 'X-API-KEY: edd1c9f034335f136f87ad84b625c8f1' -X PUT -d ' +{ + "uri": "/", + "upstream": { + "type": "roundrobin", + "nodes": { + "httpbin.org": 1 + } + } +}' +``` From e2ed44cb264976f9df0d78ab93977d8c01f0adc5 Mon Sep 17 00:00:00 2001 From: baiyun <337531158@qq.com> Date: Mon, 15 Jan 2024 11:18:20 +0800 Subject: [PATCH 14/20] docs: Adjust the directory of Chinese documents (#10815) --- docs/zh/latest/CHANGELOG.md | 2 +- docs/zh/latest/config.json | 38 +++++++++++++++---------------- docs/zh/latest/debug-mode.md | 2 +- docs/zh/latest/external-plugin.md | 2 +- 4 files changed, 22 insertions(+), 22 deletions(-) diff --git a/docs/zh/latest/CHANGELOG.md b/docs/zh/latest/CHANGELOG.md index 49d475dd8ded..4151f9505b91 100644 --- a/docs/zh/latest/CHANGELOG.md +++ b/docs/zh/latest/CHANGELOG.md @@ -1,5 +1,5 @@ --- -title: CHANGELOG +title: 版本发布 ---