Sync to out.terraform-provider-hcs
* tm/master:
  fix(vpc): hcs_vpc_route not support route_table_id
  fix(ecs): create ecs instance with sys volume encrypt
  feat(OBS): Change resource_obs_bucket referenced from HC to HCS customized
  feat(backend): modify docs of backend
  fix(ecs): create ecs instance with encrypt volume
  fix(vpc): acl rule fix bugs
  fix(lts): fix docs of resource_lts_transfer
  feat(dms): add docs of DMS
chengxiangdong committed Aug 30, 2024
2 parents 892e774 + 6a39a5a commit 35eb35b
Showing 32 changed files with 3,116 additions and 327 deletions.
141 changes: 141 additions & 0 deletions docs/data-sources/dms_kafka_flavors.md
---
subcategory: "Distributed Message Service (DMS)"
layout: "huaweicloudstack"
page_title: "HuaweiCloudStack: hcs_dms_kafka_flavors"
description: ""
---

# hcs_dms_kafka_flavors

Use this data source to get the list of available flavor details within HuaweiCloudStack.

## Example Usage

### Query the list of kafka flavors for cluster type

```hcl
data "hcs_dms_kafka_flavors" "test" {
  type = "cluster"
}
```

### Query the kafka flavor details of the specified ID

```hcl
data "hcs_dms_kafka_flavors" "test" {
  flavor_id = "c6.2u4g.cluster"
}
```

### Query the list of kafka flavors available in the specified availability zones

```hcl
variable "az1" {}
variable "az2" {}

data "hcs_dms_kafka_flavors" "test" {
  availability_zones = [
    var.az1,
    var.az2,
  ]
}
```

## Argument Reference

* `region` - (Optional, String) Specifies the region in which to obtain the dms kafka flavors.
If omitted, the provider-level region will be used.

* `flavor_id` - (Optional, String) Specifies the DMS flavor ID, e.g. **c6.2u4g.cluster**.

* `storage_spec_code` - (Optional, String) Specifies the disk IO encoding.
+ **dms.physical.storage.high.v2**: Type of the disk that uses high I/O.
+ **dms.physical.storage.ultra.v2**: Type of the disk that uses ultra-high I/O.

* `type` - (Optional, String) Specifies flavor type. The valid values are **single** and **cluster**.

* `arch_type` - (Optional, String) Specifies the type of CPU architecture, e.g. **X86**.

* `availability_zones` - (Optional, List) Specifies the list of availability zones with available resources.
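
These filter arguments can be combined in a single query. For example, a sketch (all values taken from the documented valid values above) that narrows the results to X86 cluster flavors backed by ultra-high I/O disks:

```hcl
# Sketch: combine several filter arguments in one query.
# All values below come from the documented valid values.
data "hcs_dms_kafka_flavors" "filtered" {
  type              = "cluster"
  arch_type         = "X86"
  storage_spec_code = "dms.physical.storage.ultra.v2"
}
```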

## Attribute Reference

In addition to all arguments above, the following attributes are exported:

* `id` - The data source ID.

* `versions` - The supported flavor versions.

* `flavors` - The list of flavor details.
The [object](#dms_kafka_flavors) structure is documented below.

<a name="dms_kafka_flavors"></a>
The `flavors` block supports:

* `id` - The flavor ID.

* `type` - The flavor type.

* `vm_specification` - The underlying VM specification.

* `arch_types` - The list of supported CPU architectures.

* `ios` - The list of supported disk IO types.
The [object](#dms_kafka_flavor_ios) structure is documented below.

* `support_features` - The list of features supported by the current specification.
The [object](#dms_kafka_flavor_support_features) structure is documented below.

* `properties` - The properties of the current specification.
The [object](#dms_kafka_flavor_properties) structure is documented below.

<a name="dms_kafka_flavor_ios"></a>
The `ios` block supports:

* `storage_spec_code` - The disk IO encoding.

* `type` - The disk type.

* `availability_zones` - The list of availability zones with available resources.

* `unavailability_zones` - The list of availability zones without available resources.

<a name="dms_kafka_flavor_support_features"></a>
The `support_features` block supports:

* `name` - The function name, e.g. **connector_obs**.

* `properties` - The function property details.
The [object](#dms_kafka_flavor_support_feature_properties) structure is documented below.

<a name="dms_kafka_flavor_support_feature_properties"></a>
The `properties` block supports:

* `max_task` - The maximum number of tasks for the dump function.

* `min_task` - The minimum number of tasks for the dump function.

* `max_node` - The maximum number of nodes for the dump function.

* `min_node` - The minimum number of nodes for the dump function.

<a name="dms_kafka_flavor_properties"></a>
The `properties` block supports:

* `max_broker` - The maximum number of brokers.

* `min_broker` - The minimum number of brokers.

* `max_bandwidth_per_broker` - The maximum bandwidth per broker.

* `max_consumer_per_broker` - The maximum number of consumers per broker.

* `max_partition_per_broker` - The maximum number of partitions per broker.

* `max_tps_per_broker` - The maximum TPS per broker.

* `max_storage_per_node` - The maximum storage per node. The unit is GB.

* `min_storage_per_node` - The minimum storage per node. The unit is GB.

* `flavor_alias` - The flavor ID alias.
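
As a sketch of how the exported nested attributes can be referenced elsewhere in a configuration (the output names are illustrative, reusing the `test` data source from the examples above):

```hcl
# Illustrative outputs reading nested flavor attributes.
output "first_flavor_id" {
  value = data.hcs_dms_kafka_flavors.test.flavors[0].id
}

# `properties` is a nested block list, so it is indexed as well.
output "first_flavor_max_broker" {
  value = data.hcs_dms_kafka_flavors.test.flavors[0].properties[0].max_broker
}
```
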
144 changes: 144 additions & 0 deletions docs/data-sources/dms_kafka_instances.md
---
subcategory: "Distributed Message Service (DMS)"
layout: "huaweicloudstack"
page_title: "HuaweiCloudStack: hcs_dms_kafka_instances"
description: ""
---

# hcs_dms_kafka_instances

Use this data source to query the available instances within HuaweiCloudStack DMS service.

## Example Usage

### Query all instances with the keyword in the name

```hcl
variable "keyword" {}

data "hcs_dms_kafka_instances" "test" {
  name        = var.keyword
  fuzzy_match = true
}
```

### Query the instance with the specified name

```hcl
variable "instance_name" {}

data "hcs_dms_kafka_instances" "test" {
  name = var.instance_name
}
```

## Argument Reference

* `region` - (Optional, String) The region in which to query the kafka instance list.
If omitted, the provider-level region will be used.

* `instance_id` - (Optional, String) Specifies the kafka instance ID to match exactly.

* `name` - (Optional, String) Specifies the kafka instance name for data-source queries.

* `fuzzy_match` - (Optional, Bool) Specifies whether to match the instance name fuzzily; the default is an exact
  match (`false`).

* `status` - (Optional, String) Specifies the kafka instance status for data-source queries.

* `include_failure` - (Optional, Bool) Specifies whether the query results contain instances that failed to create.
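
The filters above can also be combined. A hedged sketch that lists instances by status while including failed creations (the `RUNNING` value is an assumed example status string, not confirmed by this page):

```hcl
# Sketch: filter by status and include instances that failed to create.
# "RUNNING" is an assumed example status value.
data "hcs_dms_kafka_instances" "by_status" {
  status          = "RUNNING"
  include_failure = true
}
```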

## Attribute Reference

In addition to all arguments above, the following attributes are exported:

* `id` - The data source ID.

* `instances` - The result of the query's list of kafka instances. The structure is documented below.

The `instances` block supports:

* `id` - The instance ID.

* `type` - The instance type.

* `name` - The instance name.

* `description` - The instance description.

* `availability_zones` - The list of AZ names.

* `enterprise_project_id` - The enterprise project ID to which the instance belongs.

* `product_id` - The product ID used by the instance.

* `engine_version` - The kafka engine version.

* `storage_spec_code` - The storage I/O specification.

* `storage_space` - The message storage capacity, in GB.

* `vpc_id` - The VPC ID to which the instance belongs.

* `network_id` - The subnet ID to which the instance belongs.

* `security_group_id` - The security group ID associated with the instance.

* `manager_user` - The username for logging in to the Kafka Manager.

* `access_user` - The access username.

* `maintain_begin` - The time at which a maintenance time window starts, the format is `HH:mm`.

* `maintain_end` - The time at which a maintenance time window ends, the format is `HH:mm`.

* `enable_public_ip` - Whether public access to the instance is enabled.

* `public_ip_ids` - The IDs of the elastic IP addresses (EIPs).

* `security_protocol` - The protocol to use after SASL is enabled.

* `enabled_mechanisms` - The authentication mechanisms to use after SASL is enabled.

* `public_conn_addresses` - The instance public access address.
The format of each connection address is `{IP address}:{port}`.

* `retention_policy` - The action to be taken when the memory usage reaches the disk capacity threshold.

* `dumping` - Whether dumping is enabled.

* `enable_auto_topic` - Whether to enable automatic topic creation.

* `partition_num` - The maximum number of topics in the DMS kafka instance.

* `ssl_enable` - Whether the Kafka SASL_SSL is enabled.

* `used_storage_space` - The used message storage space, in GB.

* `connect_address` - The IP address for instance connection.

* `port` - The port number of the instance.

* `status` - The instance status.

* `resource_spec_code` - The resource specifications identifier.

* `user_id` - The user ID who created the instance.

* `user_name` - The username who created the instance.

* `management_connect_address` - The connection address of the Kafka manager of an instance.

* `tags` - The key/value pairs to associate with the instance.

* `cross_vpc_accesses` - The cross-VPC access information. The structure is documented below.

The `cross_vpc_accesses` block supports:

* `listener_ip` - The listener IP address.

* `advertised_ip` - The advertised IP address.

* `port` - The port number.

* `port_id` - The port ID associated with the address.
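
As a sketch of consuming the `instances` list (the output name is illustrative, reusing the `test` data source from the examples above):

```hcl
# Illustrative output collecting the connection address of every matched instance.
output "kafka_connect_addresses" {
  value = [for ins in data.hcs_dms_kafka_instances.test.instances : ins.connect_address]
}
```
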
37 changes: 37 additions & 0 deletions docs/data-sources/dms_maintainwindow.md
---
subcategory: "Distributed Message Service (DMS)"
layout: "huaweicloudstack"
page_title: "HuaweiCloudStack: hcs_dms_maintainwindow"
description: ""
---

# hcs_dms_maintainwindow

Use this data source to get the ID of an available HuaweiCloudStack DMS maintenance window.

## Example Usage

```hcl
data "hcs_dms_maintainwindow" "maintainwindow1" {
  seq = 1
}
```

## Argument Reference

* `region` - (Optional, String) The region in which to obtain the dms maintenance windows. If omitted, the provider-level
region will be used.

* `seq` - (Optional, Int) Specifies the sequential number of a maintenance time window.

* `begin` - (Optional, String) Specifies the time at which a maintenance time window starts.

* `end` - (Optional, String) Specifies the time at which a maintenance time window ends.

* `default` - (Optional, Bool) Specifies whether a maintenance time window is set to the default time segment.
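
Since all arguments are optional, the window can also be selected by other fields; for example, a sketch that picks the default time segment:

```hcl
# Sketch: select the default maintenance time window instead of querying by seq.
data "hcs_dms_maintainwindow" "default_window" {
  default = true
}
```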

## Attribute Reference

In addition to all arguments above, the following attributes are exported:

* `id` - The data source ID in UUID format.
58 changes: 0 additions & 58 deletions docs/data-sources/vpc_peering_connection.md

This file was deleted.
