From f39abbc10aba425f339f890b80e63ae85a9121e3 Mon Sep 17 00:00:00 2001 From: huawei Date: Mon, 12 Aug 2024 09:27:06 +0800 Subject: [PATCH 1/8] feat(dms): add docs of DMS --- docs/data-sources/dms_kafka_flavors.md | 141 +++++++++++ docs/data-sources/dms_kafka_instances.md | 144 +++++++++++ docs/data-sources/dms_maintainwindow.md | 37 +++ docs/resources/dms_kafka_consumer_group.md | 61 +++++ docs/resources/dms_kafka_instance.md | 281 +++++++++++++++++++++ docs/resources/dms_kafka_permissions.md | 80 ++++++ docs/resources/dms_kafka_topic.md | 61 +++++ docs/resources/dms_kafka_user.md | 53 ++++ 8 files changed, 858 insertions(+) create mode 100644 docs/data-sources/dms_kafka_flavors.md create mode 100644 docs/data-sources/dms_kafka_instances.md create mode 100644 docs/data-sources/dms_maintainwindow.md create mode 100644 docs/resources/dms_kafka_consumer_group.md create mode 100644 docs/resources/dms_kafka_instance.md create mode 100644 docs/resources/dms_kafka_permissions.md create mode 100644 docs/resources/dms_kafka_topic.md create mode 100644 docs/resources/dms_kafka_user.md diff --git a/docs/data-sources/dms_kafka_flavors.md b/docs/data-sources/dms_kafka_flavors.md new file mode 100644 index 00000000..9bed0cb5 --- /dev/null +++ b/docs/data-sources/dms_kafka_flavors.md @@ -0,0 +1,141 @@ +--- +subcategory: "Distributed Message Service (DMS)" +layout: "huaweicloudstack" +page_title: "HuaweiCloudStack: hcs_dms_kafka_flavors" +description: "" +--- + +# hcs_dms_kafka_flavors + +Use this data source to get the list of available flavor details within HuaweiCloudStack. + +## Example Usage + +### Query the list of kafka flavors for cluster type + +```hcl +data "hcs_dms_kafka_flavors" "test" { + type = "cluster" +} +``` + +### Query the kafka flavor details of the specified ID + +```hcl +data "hcs_dms_kafka_flavors" "test" { + flavor_id = "c6.2u4g.cluster" +} +``` + +### Query list of kafka flavors that available in the availability zone list + +```hcl +variable "az1" {} +variable "az2" {} + +data "hcs_dms_kafka_flavors" "test" { + availability_zones = [ + var.az1, + var.az2, + ] +} +``` + +## Argument Reference + +* `region` - (Optional, String) Specifies the region in which to obtain the dms kafka flavors. + If omitted, the provider-level region will be used. + +* `flavor_id` - (Optional, String) Specifies the DMS flavor ID, e.g. **c6.2u4g.cluster**. + +* `storage_spec_code` - (Optional, String) Specifies the disk IO encoding. + + **dms.physical.storage.high.v2**: Type of the disk that uses high I/O. + + **dms.physical.storage.ultra.v2**: Type of the disk that uses ultra-high I/O. + +* `type` - (Optional, String) Specifies flavor type. The valid values are **single** and **cluster**. + +* `arch_type` - (Optional, String) Specifies the type of CPU architecture, e.g. **X86**. + +* `availability_zones` - (Optional, List) Specifies the list of availability zones with available resources. + +## Attribute Reference + +In addition to all arguments above, the following attributes are exported: + +* `id` - The data source ID. + +* `versions` - The supported flavor versions. + +* `flavors` - The list of flavor details. + The [object](#dms_kafka_flavors) structure is documented below. + + +The `flavors` block supports: + +* `id` - The flavor ID. + +* `type` - The flavor type. + +* `vm_specification` - The underlying VM specification. + +* `arch_types` - The list of supported CPU architectures. + +* `ios` - The list of supported disk IO types. 
+ The [object](#dms_kafka_flavor_ios) structure is documented below. + +* `support_features` - The list of features supported by the current specification. + The [object](#dms_kafka_flavor_support_features) structure is documented below. + +* `properties` - The properties of the current specification. + The [object](#dms_kafka_flavor_properties) structure is documented below. + + +The `ios` block supports: + +* `storage_spec_code` - The disk IO encoding. + +* `type` - The disk type. + +* `availability_zones` - The list of availability zones with available resources. + +* `unavailability_zones` - The list of unavailability zones with available resources. + + +The `support_features` block supports: + +* `name` - The function name, e.g. **connector_obs**. + +* `properties` - The function property details. + The [object](#dms_kafka_flavor_support_feature_properties) structure is documented below. + + +The `properties` block supports: + +* `max_task` - The maximum number of tasks for the dump function. + +* `min_task` - The minimum number of tasks for the dump function. + +* `max_node` - The maximum number of nodes for the dump function. + +* `min_node` - The minimum number of nodes for the dump function. + + +The `properties` block supports: + +* `max_broker` - The maximum number of brokers. + +* `min_broker` - The minimum number of brokers. + +* `max_bandwidth_per_broker` - The maximum bandwidth per broker. + +* `max_consumer_per_broker` - The maximum number of consumers per broker. + +* `max_partition_per_broker` - The maximum number of partitions per broker. + +* `max_tps_per_broker` - The maximum TPS per broker. + +* `max_storage_per_node` - The maximum storage per node. The unit is GB. + +* `min_storage_per_node` - The minimum storage per node. The unit is GB. + +* `flavor_alias` - The flavor ID alias. diff --git a/docs/data-sources/dms_kafka_instances.md b/docs/data-sources/dms_kafka_instances.md new file mode 100644 index 00000000..c4b20046 --- /dev/null +++ b/docs/data-sources/dms_kafka_instances.md @@ -0,0 +1,144 @@ +--- +subcategory: "Distributed Message Service (DMS)" +layout: "huaweicloudstack" +page_title: "HuaweiCloudStack: hcs_dms_kafka_instances" +description: "" +--- + +# hcs_dms_kafka_instances + +Use this data source to query the available instances within HuaweiCloudStack DMS service. + +## Example Usage + +### Query all instances with the keyword in the name + +```hcl +variable "keyword" {} + +data "hcs_dms_kafka_instances" "test" { + name = var.keyword + fuzzy_match = true +} +``` + +### Query the instance with the specified name + +```hcl +variable "instance_name" {} + +data "hcs_dms_kafka_instances" "test" { + name = var.instance_name +} +``` + +## Argument Reference + +* `region` - (Optional, String) The region in which to query the kafka instance list. + If omitted, the provider-level region will be used. + +* `instance_id` - (Optional, String) Specifies the kafka instance ID to match exactly. + +* `name` - (Optional, String) Specifies the kafka instance name for data-source queries. + +* `fuzzy_match` - (Optional, Bool) Specifies whether to match the instance name fuzzily, the default is a exact + match (`flase`). + +* `status` - (Optional, String) Specifies the kafka instance status for data-source queries. + +* `include_failure` - (Optional, Bool) Specifies whether the query results contain instances that failed to create. + +## Attribute Reference + +In addition to all arguments above, the following attributes are exported: + +* `id` - The data source ID. 
+ +* `instances` - The result of the query's list of kafka instances. The structure is documented below. + +The `instances` block supports: + +* `id` - The instance ID. + +* `type` - The instance type. + +* `name` - The instance name. + +* `description` - The instance description. + +* `availability_zones` - The list of AZ names. + +* `enterprise_project_id` - The enterprise project ID to which the instance belongs. + +* `product_id` - The product ID used by the instance. + +* `engine_version` - The kafka engine version. + +* `storage_spec_code` - The storage I/O specification. + +* `storage_space` - The message storage capacity, in GB unit. + +* `vpc_id` - The VPC ID to which the instance belongs. + +* `network_id` - The subnet ID to which the instance belongs. + +* `security_group_id` - The security group ID associated with the instance. + +* `manager_user` - The username for logging in to the Kafka Manager. + +* `access_user` - The access username. + +* `maintain_begin` - The time at which a maintenance time window starts, the format is `HH:mm`. + +* `maintain_end` - The time at which a maintenance time window ends, the format is `HH:mm`. + +* `enable_public_ip` - Whether public access to the instance is enabled. + +* `public_ip_ids` - The IDs of the elastic IP address (EIP). + +* `security_protocol` - The protocol to use after SASL is enabled. + +* `enabled_mechanisms` - The authentication mechanisms to use after SASL is enabled. + +* `public_conn_addresses` - The instance public access address. + The format of each connection address is `{IP address}:{port}`. + +* `retention_policy` - The action to be taken when the memory usage reaches the disk capacity threshold. + +* `dumping` - Whether to dumping is enabled. + +* `enable_auto_topic` - Whether to enable automatic topic creation. + +* `partition_num` - The maximum number of topics in the DMS kafka instance. + +* `ssl_enable` - Whether the Kafka SASL_SSL is enabled. + +* `used_storage_space` - The used message storage space, in GB unit. + +* `connect_address` - The IP address for instance connection. + +* `port` - The port number of the instance. + +* `status` - The instance status. + +* `resource_spec_code` - The resource specifications identifier. + +* `user_id` - The user ID who created the instance. + +* `user_name` - The username who created the instance. + +* `management_connect_address` - The connection address of the Kafka manager of an instance. + +* `tags` - The key/value pairs to associate with the instance. + +* `cross_vpc_accesses` - Indicates the Access information of cross-VPC. The structure is documented below. + +The `cross_vpc_accesses` block supports: + +* `listener_ip` - The listener IP address. + +* `advertised_ip` - The advertised IP Address. + +* `port` - The port number. + +* `port_id` - The port ID associated with the address. diff --git a/docs/data-sources/dms_maintainwindow.md b/docs/data-sources/dms_maintainwindow.md new file mode 100644 index 00000000..84f66294 --- /dev/null +++ b/docs/data-sources/dms_maintainwindow.md @@ -0,0 +1,37 @@ +--- +subcategory: "Distributed Message Service (DMS)" +layout: "huaweicloudstack" +page_title: "HuaweiCloudStack: hcs_dms_maintainwindow" +description: "" +--- + +# hcs_dms_maintainwindow + +Use this data source to get the ID of an available HuaweiCloudStack dms maintenance windows. 
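+
+For example, the queried window can be wired into resources that accept a maintenance time window, such as
+`hcs_dms_kafka_instance`. The sketch below is only an illustration: it assumes the instance is fully defined
+elsewhere in the configuration and that the exported `begin`/`end` values use the same `HH:mm` format expected
+by that resource.
+
+```hcl
+data "hcs_dms_maintainwindow" "default" {
+  default = true
+}
+
+resource "hcs_dms_kafka_instance" "example" {
+  # ... other required arguments ...
+
+  maintain_begin = data.hcs_dms_maintainwindow.default.begin
+  maintain_end   = data.hcs_dms_maintainwindow.default.end
+}
+```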
+ +## Example Usage + +```hcl +data "hcs_dms_maintainwindow" "maintainwindow1" { + seq = 1 +} +``` + +## Argument Reference + +* `region` - (Optional, String) The region in which to obtain the dms maintenance windows. If omitted, the provider-level + region will be used. + +* `seq` - (Optional, Int) Indicates the sequential number of a maintenance time window. + +* `begin` - (Optional, String) Indicates the time at which a maintenance time window starts. + +* `end` - (Optional, String) Indicates the time at which a maintenance time window ends. + +* `default` - (Optional, Bool) Indicates whether a maintenance time window is set to the default time segment. + +## Attribute Reference + +In addition to all arguments above, the following attributes are exported: + +* `id` - Specifies a data source ID in UUID format. diff --git a/docs/resources/dms_kafka_consumer_group.md b/docs/resources/dms_kafka_consumer_group.md new file mode 100644 index 00000000..286fd6eb --- /dev/null +++ b/docs/resources/dms_kafka_consumer_group.md @@ -0,0 +1,61 @@ +--- +subcategory: "Distributed Message Service (DMS)" +layout: "huaweicloudstack" +page_title: "HuaweiCloudStack: hcs_dms_kafka_consumer_group" +description: "" +--- + +# hcs_dms_kafka_consumer_group + +Manages DMS Kafka consumer group resources within HuaweiCloudStack. + +## Example Usage + +```hcl +variable "instance_id" {} +resource "hcs_dms_kafka_consumer_group" "test" { + instance_id = var.instance_id + name = "consumer_group_test" + description = "the description of the consumer group" +} +``` + +## Argument Reference + +The following arguments are supported: + +* `region` - (Optional, String, ForceNew) Specifies the region in which to create the resource. + If omitted, the provider-level region will be used. Changing this parameter will create a new resource. + +* `instance_id` - (Required, String, ForceNew) Specifies the ID of the kafka instance. + + Changing this parameter will create a new resource. + +* `name` - (Required, String, ForceNew) Specifies the name of the consumer group. + + Changing this parameter will create a new resource. + +* `description` - (Optional, String) Specifies the description of the consumer group. + +## Attribute Reference + +In addition to all arguments above, the following attributes are exported: + +* `id` - The resource ID. + +* `state` - Indicates the state of the consumer group. This value can be : + **DEAD**, **EMPTY**, **PreparingRebalance**, **CompletingRebalance**, **Stable**. + +* `coordinator_id` - Indicates the coordinator id of the consumer group. + +* `lag` - Indicates the lag number of the consumer group. + +* `created_at` - Indicates the creation time of the consumer group. + +## Import + +The kafka consumer group can be imported using the kafka `instance_id` and `name` separated by a slash, e.g. + +``` +$ terraform import hcs_dms_kafka_consumer_group.test / +``` diff --git a/docs/resources/dms_kafka_instance.md b/docs/resources/dms_kafka_instance.md new file mode 100644 index 00000000..f9972a0a --- /dev/null +++ b/docs/resources/dms_kafka_instance.md @@ -0,0 +1,281 @@ +--- +subcategory: "Distributed Message Service (DMS)" +layout: "huaweicloudstack" +page_title: "HuaweiCloudStack: hcs_dms_kafka_instance" +description: "" +--- + +# hcs_dms_kafka_instance + +Manage DMS Kafka instance resources within HuaweiCloudStack. 
+ +## Example Usage + +### Create a Kafka instance using flavor ID + +```hcl +variable "vpc_id" {} +variable "subnet_id" {} +variable "security_group_id" {} +variable "access_password" {} + +variable "availability_zones" { + default = ["your_availability_zones_a", "your_availability_zones_b", "your_availability_zones_c"] +} +variable "flavor_id" { + default = "your_flavor_id, such: c6.2u4g.cluster" +} +variable "storage_spec_code" { + default = "your_storage_spec_code, such: dms.physical.storage.ultra.v2" +} + +# Query flavor information based on flavorID and storage I/O specification. +# Make sure the flavors are available in the availability zone. +data "hcs_dms_kafka_flavors" "test" { + type = "cluster" + flavor_id = var.flavor_id + availability_zones = var.availability_zones + storage_spec_code = var.storage_spec_code +} + +resource "hcs_dms_kafka_instance" "test" { + name = "kafka_test" + vpc_id = var.vpc_id + network_id = var.subnet_id + security_group_id = var.security_group_id + + flavor_id = data.hcs_dms_kafka_flavors.test.flavor_id + storage_spec_code = data.hcs_dms_kafka_flavors.test.flavors[0].ios[0].storage_spec_code + availability_zones = var.availability_zones + engine_version = "2.7" + storage_space = 600 + broker_num = 3 + + access_user = "user" + password = var.access_password +} +``` + +## Argument Reference + +The following arguments are supported: + +* `region` - (Optional, String, ForceNew) The region in which to create the DMS Kafka instances. If omitted, the + provider-level region will be used. Changing this creates a new instance resource. + +* `name` - (Required, String) Specifies the name of the DMS Kafka instance. An instance name starts with a letter, + consists of 4 to 64 characters, and supports only letters, digits, hyphens (-) and underscores (_). + +* `flavor_id` - (Optional, String) Specifies the Kafka [flavor ID](https://support.huawei.com/enterprise/zh/doc/EDOC1100369421?section=k001), + e.g. **c6.2u4g.cluster**. This parameter and `product_id` are alternative. + + -> It is recommended to use `flavor_id` if the region supports it. + +* `product_id` - (Optional, String) Specifies a product ID, which includes bandwidth, partition, broker and default + storage capacity. + + -> **NOTE:** Change this to change the bandwidth, partition and broker of the Kafka instances. Please note that the + broker changes may cause storage capacity changes. So, if you specify the value of `storage_space`, you need to + manually modify the value of `storage_space` after changing the `product_id`. + +* `engine_version` - (Required, String, ForceNew) Specifies the version of the Kafka engine, + such as 2.7 or other supported versions. Changing this creates a new instance resource. + +* `storage_spec_code` - (Required, String, ForceNew) Specifies the storage I/O specification. + If the instance is created with `flavor_id`, the valid values are as follows: + + **dms.physical.storage.high.v2**: Type of the disk that uses high I/O. + + **dms.physical.storage.ultra.v2**: Type of the disk that uses ultra-high I/O. + + If the instance is created with `product_id`, the valid values are as follows: + + **dms.physical.storage.high**: Type of the disk that uses high I/O. + The corresponding bandwidths are **100MB** and **300MB**. + + **dms.physical.storage.ultra**: Type of the disk that uses ultra-high I/O. + The corresponding bandwidths are **100MB**, **300MB**, **600MB** and **1,200MB**. + + Changing this creates a new instance resource. 
+ +* `vpc_id` - (Required, String, ForceNew) Specifies the ID of a VPC. Changing this creates a new instance resource. + +* `network_id` - (Required, String, ForceNew) Specifies the ID of a subnet. Changing this creates a new instance + resource. + +* `security_group_id` - (Required, String) Specifies the ID of a security group. + +* `availability_zones` - (Required, List, ForceNew) The names of the AZ where the Kafka instances reside. + The parameter value can not be left blank or an empty array. Changing this creates a new instance resource. + + -> **NOTE:** Deploy one availability zone or at least three availability zones. Do not select two availability zones. + Deploy to more availability zones, the better the reliability and SLA coverage. + + ~> The parameter behavior of `availability_zones` has been changed from `list` to `set`. + +* `arch_type` - (Optional, String, ForceNew) Specifies the CPU architecture. Valid value is **X86**. + Changing this creates a new instance resource. + +* `manager_user` - (Optional, String, ForceNew) Specifies the username for logging in to the Kafka Manager. The username + consists of 4 to 64 characters and can contain letters, digits, hyphens (-), and underscores (_). Changing this + creates a new instance resource. + +* `manager_password` - (Optional, String, ForceNew) Specifies the password for logging in to the Kafka Manager. The + password must meet the following complexity requirements: Must be 8 to 32 characters long. Must contain at least 2 of + the following character types: lowercase letters, uppercase letters, digits, and special characters (`~!@#$%^&*()-_ + =+\\|[{}]:'",<.>/?). Changing this creates a new instance resource. + +* `storage_space` - (Optional, Int) Specifies the message storage capacity, the unit is GB. + The storage spaces corresponding to the product IDs are as follows: + + **c6.4u16g.cluster**. + + **c6.8u32g.cluster**. + + **c6.16u64g.cluster**. + + **c6.32u128g.cluster**. + + It is required when creating an instance with `flavor_id`. + +* `broker_num` - (Optional, Int) Specifies the broker numbers. + It is required when creating an instance with `flavor_id`. + +* `access_user` - (Optional, String, ForceNew) Specifies the username of SASL_SSL user. A username consists of 4 + to 64 characters and supports only letters, digits, and hyphens (-). Changing this creates a new instance resource. + +* `password` - (Optional, String) Specifies the password of SASL_SSL user. A password must meet the following + complexity requirements: Must be 8 to 32 characters long. Must contain at least 2 of the following character types: + lowercase letters, uppercase letters, digits, and special characters (`~!@#$%^&*()-_=+\\|[{}]:'",<.>/?). + +* `security_protocol` - (Optional, String, ForceNew) Specifies the protocol to use after SASL is enabled. Value options: + + **SASL_SSL**: Data is encrypted with SSL certificates for high-security transmission. + + **SASL_PLAINTEXT**: Data is transmitted in plaintext with username and password authentication. This protocol only + uses the SCRAM-SHA-512 mechanism and delivers high performance. + + Defaults to **SASL_SSL**. Changing this creates a new instance resource. + +* `enabled_mechanisms` - (Optional, List, ForceNew) Specifies the authentication mechanisms to use after SASL is + enabled. Value options: + + **PLAIN**: Simple username and password verification. + + **SCRAM-SHA-512**: User credential verification, which is more secure than **PLAIN**. + + Defaults to [**PLAIN**]. 
Changing this creates a new instance resource. + +* `description` - (Optional, String) Specifies the description of the DMS Kafka instance. It is a character string + containing not more than 1,024 characters. + +* `maintain_begin` - (Optional, String) Specifies the time at which a maintenance time window starts. Format: HH:mm. The + start time and end time of a maintenance time window must indicate the time segment of a supported maintenance time + window. The start time must be set to 22:00, 02:00, 06:00, 10:00, 14:00, or 18:00. Parameters `maintain_begin` + and `maintain_end` must be set in pairs. If parameter `maintain_begin` is left blank, parameter `maintain_end` is also + blank. In this case, the system automatically allocates the default start time 02:00. + +* `maintain_end` - (Optional, String) Specifies the time at which a maintenance time window ends. Format: HH:mm. The + start time and end time of a maintenance time window must indicate the time segment of a supported maintenance time + window. The end time is four hours later than the start time. For example, if the start time is 22:00, the end time is + 02:00. Parameters `maintain_begin` + and `maintain_end` must be set in pairs. If parameter `maintain_end` is left blank, parameter + `maintain_begin` is also blank. In this case, the system automatically allocates the default end time 06:00. + +* `public_ip_ids` - (Optional, List, ForceNew) Specifies the IDs of the elastic IP address (EIP) + bound to the DMS Kafka instance. Changing this creates a new instance resource. + + If the instance is created with `flavor_id`, the total number of public IPs is equal to `broker_num`. + + If the instance is created with `product_id`, the total number of public IPs must provide as follows: + + | Bandwidth | Total number of public IPs | + | ---- | ---- | + | 100MB | 3 | + | 300MB | 3 | + | 600MB | 4 | + | 1,200MB | 8 | + +* `retention_policy` - (Optional, String) Specifies the action to be taken when the memory usage reaches the disk + capacity threshold. The valid values are as follows: + + **time_base**: Automatically delete the earliest messages. + + **produce_reject**: Stop producing new messages. + +* `dumping` - (Optional, Bool, ForceNew) Specifies whether to enable message dumping(smart connect). + Changing this creates a new instance resource. + +* `enable_auto_topic` - (Optional, Bool) Specifies whether to enable automatic topic creation. If automatic + topic creation is enabled, a topic will be automatically created with 3 partitions and 3 replicas when a message is + produced to or consumed from a topic that does not exist. + The default value is false. + +* `tags` - (Optional, Map) The key/value pairs to associate with the DMS Kafka instance. + +* `cross_vpc_accesses` - (Optional, List) Specifies the cross-VPC access information. + The [object](#dms_cross_vpc_accesses) structure is documented below. + + +The `cross_vpc_accesses` block supports: + +* `advertised_ip` - (Optional, String) The advertised IP Address or domain name. + +## Attribute Reference + +In addition to all arguments above, the following attributes are exported: + +* `id` - Specifies a resource ID in UUID format. + +* `engine` - Indicates the message engine. + +* `partition_num` - Indicates the number of partitions in Kafka instance. + +* `used_storage_space` - Indicates the used message storage space. Unit: GB + +* `port` - Indicates the port number of the DMS Kafka instance. + +* `status` - Indicates the status of the DMS Kafka instance. 
+ +* `enable_public_ip` - Indicates whether public access to the DMS Kafka instance is enabled. + +* `resource_spec_code` - Indicates a resource specifications identifier. + +* `type` - Indicates the DMS Kafka instance type. + +* `user_id` - Indicates the ID of the user who created the DMS Kafka instance + +* `user_name` - Indicates the name of the user who created the DMS Kafka instance + +* `connect_address` - Indicates the IP address of the DMS Kafka instance. + +* `management_connect_address` - Indicates the Kafka Manager connection address of a Kafka instance. + +* `cross_vpc_accesses` - Indicates the Access information of cross-VPC. The structure is documented below. + +* `charging_mode` - Indicates the charging mode of the instance. + +The `cross_vpc_accesses` block supports: + +* `listener_ip` - The listener IP address. +* `port` - The port number. +* `port_id` - The port ID associated with the address. + +## Timeouts + +This resource provides the following timeouts configuration options: + +* `create` - Default is 50 minutes. +* `update` - Default is 50 minutes. +* `delete` - Default is 15 minutes. + +## Import + +DMS Kafka instance can be imported using the instance id, e.g. + +``` + $ terraform import hcs_dms_kafka_instance.instance_1 8d3c7938-dc47-4937-a30f-c80de381c5e3 +``` + +Note that the imported state may not be identical to your resource definition, due to some attributes missing from the +API response, security or some other reason. The missing attributes include: +`password`, `manager_password`, `public_ip_ids`, `security_protocol`, `enabled_mechanisms` and `arch_type`. +It is generally recommended running `terraform plan` after importing +a DMS Kafka instance. You can then decide if changes should be applied to the instance, or the resource definition +should be updated to align with the instance. Also you can ignore changes as below. + +```hcl +resource "hcs_dms_kafka_instance" "instance_1" { + ... + + lifecycle { + ignore_changes = [ + password, manager_password, + ] + } +} +``` diff --git a/docs/resources/dms_kafka_permissions.md b/docs/resources/dms_kafka_permissions.md new file mode 100644 index 00000000..e562e8a8 --- /dev/null +++ b/docs/resources/dms_kafka_permissions.md @@ -0,0 +1,80 @@ +--- +subcategory: "Distributed Message Service (DMS)" +layout: "huaweicloudstack" +page_title: "HuaweiCloudStack: hcs_dms_kafka_permissions" +description: "" +--- + +# hcs_dms_kafka_permissions + +Use the resource to grant user permissions of a kafka topic within HuaweiCloudStack. + +## Example Usage + +```hcl +variable "kafka_instance_id" {} +variable "kafka_topic_name" {} +variable "user_1" {} +variable "user_2" {} + +resource "hcs_dms_kafka_permissions" "test" { + instance_id = var.kafka_instance_id + topic_name = var.kafka_topic_name + policies { + user_name = var.user_1 + access_policy = "all" + } + + policies { + user_name = var.user_2 + access_policy = "pub" + } +} +``` + +## Argument Reference + +The following arguments are supported: + +* `region` - (Optional, String, ForceNew) The region in which to create the DMS kafka permissions resource. If omitted, the + provider-level region will be used. Changing this creates a new resource. + +* `instance_id` - (Required, String, ForceNew) Specifies the ID of the DMS kafka instance to which the permissions belongs. + Changing this creates a new resource. + +* `topic_name` - (Required, String, ForceNew) Specifies the name of the topic to which the permissions belongs. + Changing this creates a new resource. 
+ +* `policies` - (Required, List) Specifies the permissions policies. The [object](#dms_kafka_policies) structure is + documented below. + + +The `policies` block supports: + +* `user_name` - (Required, String) Specifies the username. + +* `access_policy` - (Required, String) Specifies the permissions type. The value can be: + + **all**: publish and subscribe permissions. + + **pub**: publish permissions. + + **sub**: subscribe permissions. + +## Attribute Reference + +In addition to all arguments above, the following attributes are exported: + +* `id` - The resource ID which is formatted `/`. + +## Timeouts + +This resource provides the following timeouts configuration options: + +* `create` - Default is 5 minutes. +* `delete` - Default is 5 minutes. + +## Import + +DMS kafka permissions can be imported using the kafka instance ID and topic name separated by a slash, e.g.: + +``` +terraform import hcs_dms_kafka_permissions.permissions c8057fe5-23a8-46ef-ad83-c0055b4e0c5c/topic_1 +``` diff --git a/docs/resources/dms_kafka_topic.md b/docs/resources/dms_kafka_topic.md new file mode 100644 index 00000000..76cf2b55 --- /dev/null +++ b/docs/resources/dms_kafka_topic.md @@ -0,0 +1,61 @@ +--- +subcategory: "Distributed Message Service (DMS)" +layout: "huaweicloudstack" +page_title: "HuaweiCloudStack: hcs_dms_kafka_topic" +description: "" +--- + +# hcs_dms_kafka_topic + +Manages a DMS kafka topic resource within HuaweiCloudStack. + +## Example Usage + +```hcl +variable "kafka_instance_id" {} + +resource "hcs_dms_kafka_topic" "topic" { + instance_id = var.kafka_instance_id + name = "topic_1" + partitions = 20 +} +``` + +## Argument Reference + +The following arguments are supported: + +* `region` - (Optional, String, ForceNew) The region in which to create the DMS kafka topic resource. If omitted, the + provider-level region will be used. Changing this creates a new resource. + +* `instance_id` - (Required, String, ForceNew) Specifies the ID of the DMS kafka instance to which the topic belongs. + Changing this creates a new resource. + +* `name` - (Required, String, ForceNew) Specifies the name of the topic. The name starts with a letter, consists of 4 to + 64 characters, and supports only letters, digits, hyphens (-) and underscores (_). Changing this creates a new + resource. + +* `partitions` - (Required, Int) Specifies the partition number. The value ranges from 1 to 200. + +* `replicas` - (Optional, Int, ForceNew) Specifies the replica number. The value ranges from 1 to 3 and defaults to 3. + Changing this creates a new resource. + +* `aging_time` - (Optional, Int) Specifies the aging time in hours. The value ranges from 1 to 720 and defaults to 72. + +* `sync_replication` - (Optional, Bool) Whether or not to enable synchronous replication. + +* `sync_flushing` - (Optional, Bool) Whether or not to enable synchronous flushing. + +## Attribute Reference + +In addition to all arguments above, the following attributes are exported: + +* `id` - The resource ID which equals to the topic name. 
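+
+As a quick illustration of the optional arguments described above, a topic that overrides the defaults could be
+declared as follows (a sketch only; the instance ID is assumed to be provided through a variable):
+
+```hcl
+variable "kafka_instance_id" {}
+
+resource "hcs_dms_kafka_topic" "tuned" {
+  instance_id      = var.kafka_instance_id
+  name             = "topic_tuned"
+  partitions       = 10
+  replicas         = 3
+  aging_time       = 36
+  sync_replication = true
+  sync_flushing    = true
+}
+```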
+ +## Import + +DMS kafka topics can be imported using the kafka instance ID and topic name separated by a slash, e.g.: + +```sh +terraform import hcs_dms_kafka_topic.topic c8057fe5-23a8-46ef-ad83-c0055b4e0c5c/topic_1 +``` diff --git a/docs/resources/dms_kafka_user.md b/docs/resources/dms_kafka_user.md new file mode 100644 index 00000000..9bc6f11c --- /dev/null +++ b/docs/resources/dms_kafka_user.md @@ -0,0 +1,53 @@ +--- +subcategory: "Distributed Message Service (DMS)" +layout: "huaweicloudstack" +page_title: "HuaweiCloudStack: hcs_dms_kafka_user" +description: "" +--- + +# hcs_dms_kafka_user + +Manages a DMS kafka user resource within HuaweiCloudStack. + +## Example Usage + +```hcl +variable "kafka_instance_id" {} +variable "user_password" {} + +resource "hcs_dms_kafka_user" "user" { + instance_id = var.kafka_instance_id + name = "user_1" + password = var.user_password +} +``` + +## Argument Reference + +The following arguments are supported: + +* `region` - (Optional, String, ForceNew) The region in which to create the DMS kafka user resource. If omitted, the + provider-level region will be used. Changing this creates a new resource. + +* `instance_id` - (Required, String, ForceNew) Specifies the ID of the DMS kafka instance to which the user belongs. + Changing this creates a new resource. + +* `name` - (Required, String, ForceNew) Specifies the name of the user. Changing this creates a new resource. + +* `password` - (Required, String) Specifies the password of the user. The parameter must be 8 to 32 characters + long and contain only letters(case-sensitive), digits, and special characters(`~!@#$%^&*()-_=+|[{}]:'",<.>/?). + The value must be different from name. + +## Attribute Reference + +In addition to all arguments above, the following attributes are exported: + +* `id` - The resource ID which is formatted `/`. + +## Import + +DMS kafka users can be imported using the kafka instance ID and user name separated by a slash, e.g. + +``` +terraform import hcs_dms_kafka_user.user c8057fe5-23a8-46ef-ad83-c0055b4e0c5c/user_1 +``` From db6ed4c9718087a149822916e06dce7ff285a403 Mon Sep 17 00:00:00 2001 From: huawei Date: Mon, 12 Aug 2024 09:30:15 +0800 Subject: [PATCH 2/8] fix(lts): fix docs of resource_lts_transfer --- docs/resources/lts_transfer.md | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/docs/resources/lts_transfer.md b/docs/resources/lts_transfer.md index 98d779fd..c2c935cf 100644 --- a/docs/resources/lts_transfer.md +++ b/docs/resources/lts_transfer.md @@ -16,7 +16,7 @@ Manages an LTS transfer task resource within HuaweiCloudStack. 
```hcl variable "lts_group_id" {} variable "lts_stream_id" {} -variable "obs_buket" {} +variable "obs_bucket" {} resource "hcs_lts_transfer" "test" { log_group_id = var.lts_group_id @@ -34,7 +34,7 @@ resource "hcs_lts_transfer" "test" { log_transfer_detail { obs_period = 3 obs_period_unit = "hour" - obs_bucket_name = var.obs_buket + obs_bucket_name = var.obs_bucket obs_dir_prefix_name = "dir_prefix_" obs_prefix_name = "prefix_" obs_time_zone = "UTC" @@ -107,7 +107,7 @@ resource "hcs_lts_transfer" "test" { ```hcl variable "lts_group_id" {} variable "lts_stream_id" {} -variable "obs_buket" {} +variable "obs_bucket" {} variable "agency_domain_id" {} variable "agency_domain_name" {} variable "agency_name" {} @@ -129,7 +129,7 @@ resource "hcs_lts_transfer" "obs_agency" { log_transfer_detail { obs_period = 3 obs_period_unit = "hour" - obs_bucket_name = var.obs_buket + obs_bucket_name = var.obs_bucket obs_dir_prefix_name = "dir_prefix_" obs_prefix_name = "prefix_" obs_time_zone = "UTC" From 5508cea649508d4a59639d15b57df3a8d1de8f5c Mon Sep 17 00:00:00 2001 From: huawei Date: Wed, 28 Aug 2024 16:32:57 +0800 Subject: [PATCH 3/8] fix(vpc): acl rule fix bugs --- huaweicloudstack/resource_hcs_network_acl_rule.go | 6 +++++- 1 file changed, 5 insertions(+), 1 deletion(-) diff --git a/huaweicloudstack/resource_hcs_network_acl_rule.go b/huaweicloudstack/resource_hcs_network_acl_rule.go index 231ca49e..8a4db880 100644 --- a/huaweicloudstack/resource_hcs_network_acl_rule.go +++ b/huaweicloudstack/resource_hcs_network_acl_rule.go @@ -203,7 +203,11 @@ func resourceNetworkACLRuleUpdate(d *schema.ResourceData, meta interface{}) erro } if d.HasChange("protocol") { protocol := d.Get("protocol").(string) - updateOpts.Protocol = &protocol + if protocol == "any" { + updateOpts.Protocol = nil + } else { + updateOpts.Protocol = &protocol + } } if d.HasChange("action") { action := d.Get("action").(string) From b70c0174ff471f66c79a257d29a94c193914ae3d Mon Sep 17 00:00:00 2001 From: zhaopanju Date: Wed, 28 Aug 2024 15:35:53 +0800 Subject: [PATCH 4/8] fix(ecs): create ecs instance with encrypt volume --- docs/resources/ecs_compute_instance.md | 58 ++++++++++++++++++- examples/as/alarm_policy/main.tf | 50 ---------------- examples/as/instance_attach/main.tf | 9 +++ examples/as/life_hook/main.tf | 11 +++- examples/as/notification/main.tf | 17 +++--- examples/ecs/encrypt-volume/README.md | 3 + examples/ecs/encrypt-volume/main.tf | 47 +++++++++++++++ examples/ecs/encrypt-volume/variables.tf | 49 ++++++++++++++++ .../openstack/ecs/v1/cloudservers/requests.go | 12 +++- .../openstack/ecs/v1/flavors/results.go | 2 + .../openstack/evs/v2/cloudvolumes/results.go | 6 ++ .../ecs/data_source_hcs_compute_flavors.go | 13 +++-- .../ecs/resource_hcs_compute_instance.go | 30 ++++++++-- 13 files changed, 235 insertions(+), 72 deletions(-) create mode 100644 examples/ecs/encrypt-volume/README.md create mode 100644 examples/ecs/encrypt-volume/main.tf create mode 100644 examples/ecs/encrypt-volume/variables.tf diff --git a/docs/resources/ecs_compute_instance.md b/docs/resources/ecs_compute_instance.md index a677a6af..056b5237 100644 --- a/docs/resources/ecs_compute_instance.md +++ b/docs/resources/ecs_compute_instance.md @@ -341,6 +341,57 @@ resource "hcs_ecs_compute_instance" "ecs-userdata" { delete_eip_on_termination = true } ``` +### Instance with Encrypt Data Volumes +``` +data "hcs_availability_zones" "test" { +} + +data "hcs_ecs_compute_flavors" "flavors" { + availability_zone = data.hcs_availability_zones.test.names[0] + cpu_core_count = 
2 + memory_size = 4 +} + +data "hcs_vpc_subnets" "test" { + name = "subnet-32a8" +} + +data "hcs_ims_images" "test" { + name = "mini_image" +} + +data "hcs_networking_secgroups" "test" { + name = "default" +} + +resource "hcs_ecs_compute_instance" "ecs-userdata" { + name = "ecs-userdata" + description = "terraform test" + image_id = data.hcs_ims_images.test.images[0].id + flavor_id = data.hcs_ecs_compute_flavors.flavors.ids[0] + ext_boot_type = data.hcs_ecs_compute_flavors.test.flavors[0].ext_boot_type + security_group_ids = [data.hcs_networking_secgroups.test.security_groups[0].id] + availability_zone = data.hcs_availability_zones.test.names[0] + user_data = "xxxxxxxxxxxxxxxxxxxxxxx" + + network { + uuid = data.hcs_vpc_subnets.test.subnets[0].id + source_dest_check = false + } + + system_disk_type = "business_type_01" + system_disk_size = 10 + + data_disks { + kms_key_id = "ce488d6a-6090-4f7f-a95b-4faf3ce0bad0" + encrypt_cipher = "AES256-XTS" + type = "business_type_01" + size = "10" + } + delete_disks_on_termination = true + delete_eip_on_termination = true +} +``` ## Argument Reference @@ -354,6 +405,9 @@ The following arguments are supported: * `flavor_id` - (Required, String) Specifies the flavor ID of the instance to be created. +* `ext_boot_type` - (Required, String) Specifies the image type in flavor. which must be one of available image types, + contains of *LocalDisk*, *Volume* + * `image_id` - (Optional, String, ForceNew) Required if `image_name` is empty. Specifies the image ID of the desired image for the instance. Changing this creates a new instance. @@ -450,7 +504,9 @@ The `data_disks` block supports: * `snapshot_id` - (Optional, String, ForceNew) Specifies the snapshot id. Changing this creates a new instance. * `kms_key_id` - (Optional, String, ForceNew) Specifies the ID of a KMS key. This is used to encrypt the disk. - Changing this creates a new instance. + +* `encrypt_cipher` - (Optional, String, ForceNew) Specifies the encrypt cipher of KMS. This value must be set to AES256-XTS or SM4-XTS when SM series cryptographic algorithms are used. When other cryptographic algorithms are used, this value must be AES256-XTS. 
+ This param must exist if *kms_key_id* exists The `bandwidth` block supports: diff --git a/examples/as/alarm_policy/main.tf b/examples/as/alarm_policy/main.tf index 2459a324..f8b4ffdf 100644 --- a/examples/as/alarm_policy/main.tf +++ b/examples/as/alarm_policy/main.tf @@ -64,56 +64,6 @@ resource "hcs_as_group" "my_as_group" { } } -resource "hcs_ces_alarmrule" "scaling_up_rule" { - alarm_name = "scaling_up_rule" - - metric { - namespace = "SYS.AS" - metric_name = "cpu_util" - dimensions { - name = "AutoScalingGroup" - value = hcs_as_group.my_as_group.id - } - } - condition { - period = 300 - filter = "average" - comparison_operator = ">=" - value = 80 - unit = "%" - count = 1 - } - alarm_actions { - type = "autoscaling" - notification_list = [] - } -} - -resource "hcs_ces_alarmrule" "scaling_down_rule" { - alarm_name = "scaling_down_rule" - - metric { - namespace = "SYS.AS" - metric_name = "cpu_util" - dimensions { - name = "AutoScalingGroup" - value = hcs_as_group.my_as_group.id - } - } - condition { - period = 300 - filter = "average" - comparison_operator = "<=" - value = 20 - unit = "%" - count = 1 - } - alarm_actions { - type = "autoscaling" - notification_list = [] - } -} - resource "hcs_as_policy" "scaling_up_policy" { scaling_policy_name = "scaling_up_policy" scaling_policy_type = "ALARM" diff --git a/examples/as/instance_attach/main.tf b/examples/as/instance_attach/main.tf index f7dd70be..2d7d84f6 100644 --- a/examples/as/instance_attach/main.tf +++ b/examples/as/instance_attach/main.tf @@ -3,6 +3,15 @@ variable "ecs_id" { default = "1833f1d0-9250-4054-bc30-8f6bd7469b95" } +variable "group_name" { + type = string + default = "as-group-fb25" +} + +data "hcs_as_groups" "groups" { + name = var.group_name +} + resource "hcs_as_instance_attach" "as_instance1" { scaling_group_id = data.hcs_as_groups.groups.groups[0].scaling_group_id instance_id = var.ecs_id diff --git a/examples/as/life_hook/main.tf b/examples/as/life_hook/main.tf index 3bdfdcd4..754949ff 100644 --- a/examples/as/life_hook/main.tf +++ b/examples/as/life_hook/main.tf @@ -3,11 +3,20 @@ variable "hook_name" { default = "as-policy-77e3" } +variable "smn_topic_urn" {} + data "hcs_smn_topics" "tops" { name = "topic_1" } -variable "smn_topic_urn" {} +variable "group_name" { + type = string + default = "as-group-fb25" +} + +data "hcs_as_groups" "groups" { + name = var.group_name +} resource "hcs_as_lifecycle_hook" "lifecycle_hook1" { scaling_group_id = data.hcs_as_groups.groups.groups[0].scaling_group_id diff --git a/examples/as/notification/main.tf b/examples/as/notification/main.tf index 3bdfdcd4..e8ca9149 100644 --- a/examples/as/notification/main.tf +++ b/examples/as/notification/main.tf @@ -1,19 +1,16 @@ -variable "hook_name" { +variable "group_name" { type = string - default = "as-policy-77e3" + default = "as-group-fb25" } -data "hcs_smn_topics" "tops" { - name = "topic_1" +data "hcs_as_groups" "groups" { + name = var.group_name } variable "smn_topic_urn" {} -resource "hcs_as_lifecycle_hook" "lifecycle_hook1" { +resource "hcs_as_notification" "as_notification" { scaling_group_id = data.hcs_as_groups.groups.groups[0].scaling_group_id - name = var.hook_name - type = "ADD" - default_result = "ABANDON" - notification_topic_urn = var.smn_topic_urn - notification_message = "This is a test message" + topic_urn = var.smn_topic_urn + events = [ "SCALING_UP", "SCALING_UP_FAIL", "SCALING_DOWN", "SCALING_DOWN_FAIL", "SCALING_GROUP_ABNORMAL" ] } \ No newline at end of file diff --git a/examples/ecs/encrypt-volume/README.md 
b/examples/ecs/encrypt-volume/README.md new file mode 100644 index 00000000..3eb8b704 --- /dev/null +++ b/examples/ecs/encrypt-volume/README.md @@ -0,0 +1,3 @@ +# ECS Instance With Encrypt Volume + +This example provides an ECS instance for encrypting disks. diff --git a/examples/ecs/encrypt-volume/main.tf b/examples/ecs/encrypt-volume/main.tf new file mode 100644 index 00000000..87721121 --- /dev/null +++ b/examples/ecs/encrypt-volume/main.tf @@ -0,0 +1,47 @@ +data "hcs_availability_zones" "test" { +} + +data "hcs_ecs_compute_flavors" "test" { + availability_zone = data.hcs_availability_zones.test.names[0] + cpu_core_count = 2 + memory_size = 4 +} + +data "hcs_vpc_subnets" "test" { + name = var.subnet_name +} + +data "hcs_ims_images" "test" { + name = var.image_name +} + +data "hcs_networking_secgroups" "test" { + name = var.secgroup_name +} + +resource "hcs_ecs_compute_instance" "ecs-test" { + name = join("-", [var.ecs_name, "-encrypt-volume"]) + description = var.ecs_description + image_id = data.hcs_ims_images.test.images[0].id + flavor_id = data.hcs_ecs_compute_flavors.test.ids[0] + ext_boot_type = data.hcs_ecs_compute_flavors.test.flavors[0].ext_boot_type + security_group_ids = [data.hcs_networking_secgroups.test.security_groups[0].id] + availability_zone = data.hcs_availability_zones.test.names[0] + enterprise_project_id = var.enterprise_project_id + + network { + uuid = data.hcs_vpc_subnets.test.subnets[0].id + source_dest_check = false + } + system_disk_type = var.disk_type + system_disk_size = var.system_disk_size + + data_disks { + kms_key_id = var.kms_key_id + encrypt_cipher = var.encrypt_cipher + type = var.disk_type + size = var.data_disk_size + } + delete_disks_on_termination = true + delete_eip_on_termination = true +} \ No newline at end of file diff --git a/examples/ecs/encrypt-volume/variables.tf b/examples/ecs/encrypt-volume/variables.tf new file mode 100644 index 00000000..9d484a6f --- /dev/null +++ b/examples/ecs/encrypt-volume/variables.tf @@ -0,0 +1,49 @@ +variable "vpc_name" { + default = "vpc-default" +} + +variable "subnet_name" { + default = "subnet-32a8" +} + +variable "secgroup_name" { + default = "default" +} + +variable "image_name" { + default = "cirros-arm" +} + +variable "ecs_name" { + default = "ecs-server" +} + +variable "ecs_description" { + default = "" +} + +variable "disk_type" { + default = "business_type_01" +} + +variable "system_disk_size" { + type = number + default = 10 +} + +variable "data_disk_size" { + type = number + default = 10 +} + +variable "enterprise_project_id" { + default = "default" +} + +variable "kms_key_id" { + type = string +} + +variable "encrypt_cipher" { + type = string +} \ No newline at end of file diff --git a/huaweicloudstack/sdk/huaweicloud/openstack/ecs/v1/cloudservers/requests.go b/huaweicloudstack/sdk/huaweicloud/openstack/ecs/v1/cloudservers/requests.go index a9049866..24fb50ed 100644 --- a/huaweicloudstack/sdk/huaweicloud/openstack/ecs/v1/cloudservers/requests.go +++ b/huaweicloudstack/sdk/huaweicloud/openstack/ecs/v1/cloudservers/requests.go @@ -147,6 +147,8 @@ type DataVolume struct { Extendparam *VolumeExtendParam `json:"extendparam,omitempty"` Metadata *VolumeMetadata `json:"metadata,omitempty"` + + EncryptionInfo *VolumeEncryptInfo `json:"encryption_info,omitempty"` } type VolumeExtendParam struct { @@ -154,8 +156,12 @@ type VolumeExtendParam struct { } type VolumeMetadata struct { - SystemEncrypted string `json:"__system__encrypted,omitempty"` - SystemCmkid string `json:"__system__cmkid,omitempty"` + 
SystemCmkid string `json:"__system__cmkid,omitempty"` +} + +type VolumeEncryptInfo struct { + CmkId string `json:"cmk_id,omitempty"` + Cipher string `json:"cipher,omitempty"` } type ServerExtendParam struct { @@ -181,6 +187,8 @@ type ServerExtendParam struct { SpotDurationCount int `json:"spot_duration_count,omitempty"` // Specifies the spot ECS interruption policy, which can only be set to "immediate" currently InterruptionPolicy string `json:"interruption_policy,omitempty"` + + Image_Boot bool `json:"image_boot,omitempty"` } type MetaData struct { diff --git a/huaweicloudstack/sdk/huaweicloud/openstack/ecs/v1/flavors/results.go b/huaweicloudstack/sdk/huaweicloud/openstack/ecs/v1/flavors/results.go index b15bb3e8..b49ea123 100644 --- a/huaweicloudstack/sdk/huaweicloud/openstack/ecs/v1/flavors/results.go +++ b/huaweicloudstack/sdk/huaweicloud/openstack/ecs/v1/flavors/results.go @@ -88,6 +88,8 @@ type OsExtraSpecs struct { // Indicates the status of the flavor in az level. OperationAz string `json:"cond:operation:az"` + + ExtBootType string `json:"huawei:extBootType"` } // FlavorsPage is the page returned by a pager when traversing over a diff --git a/huaweicloudstack/sdk/huaweicloud/openstack/evs/v2/cloudvolumes/results.go b/huaweicloudstack/sdk/huaweicloud/openstack/evs/v2/cloudvolumes/results.go index ee75c5d2..285d28ce 100644 --- a/huaweicloudstack/sdk/huaweicloud/openstack/evs/v2/cloudvolumes/results.go +++ b/huaweicloudstack/sdk/huaweicloud/openstack/evs/v2/cloudvolumes/results.go @@ -59,6 +59,12 @@ type VolumeMetadata struct { AttachedMode string `json:"attached_mode"` } +type EncryptInfo struct { + CmkId string `json:"cmk_id"` + + Cipher string `json:"cipher"` +} + // Link is an object that represents a link to which the disk belongs. type Link struct { // Specifies the corresponding shortcut link. 
diff --git a/huaweicloudstack/services/ecs/data_source_hcs_compute_flavors.go b/huaweicloudstack/services/ecs/data_source_hcs_compute_flavors.go index d272e865..bb0c76b6 100644 --- a/huaweicloudstack/services/ecs/data_source_hcs_compute_flavors.go +++ b/huaweicloudstack/services/ecs/data_source_hcs_compute_flavors.go @@ -67,6 +67,10 @@ func FlavorsRefSchema() *schema.Resource { Type: schema.TypeString, Computed: true, }, + "ext_boot_type": { + Type: schema.TypeString, + Computed: true, + }, }, } } @@ -128,10 +132,11 @@ func dataSourceEcsFlavorsRead(_ context.Context, d *schema.ResourceData, meta in func flattenFlavor(flavor *flavors.Flavor) map[string]interface{} { res := map[string]interface{}{ - "id": flavor.ID, - "name": flavor.Name, - "ram": flavor.Ram, - "vcpus": flavor.Vcpus, + "id": flavor.ID, + "name": flavor.Name, + "ram": flavor.Ram, + "vcpus": flavor.Vcpus, + "ext_boot_type": flavor.OsExtraSpecs.ExtBootType, } return res } diff --git a/huaweicloudstack/services/ecs/resource_hcs_compute_instance.go b/huaweicloudstack/services/ecs/resource_hcs_compute_instance.go index 92ae002d..1be0dfec 100644 --- a/huaweicloudstack/services/ecs/resource_hcs_compute_instance.go +++ b/huaweicloudstack/services/ecs/resource_hcs_compute_instance.go @@ -88,6 +88,14 @@ func ResourceComputeInstance() *schema.Resource { DefaultFunc: schema.EnvDefaultFunc("HW_FLAVOR_ID", nil), Description: "schema: Required", }, + "ext_boot_type": { + Type: schema.TypeString, + Optional: true, + Computed: true, + ValidateFunc: validation.StringInSlice([]string{ + "LocalDisk", "Volume", + }, false), + }, "flavor_name": { Type: schema.TypeString, Optional: true, @@ -219,6 +227,11 @@ func ResourceComputeInstance() *schema.Resource { Optional: true, ForceNew: true, }, + "encrypt_cipher": { + Type: schema.TypeString, + Optional: true, + ForceNew: true, + }, }, }, }, @@ -366,6 +379,10 @@ func ResourceComputeInstance() *schema.Resource { Type: schema.TypeString, Computed: true, }, + "encrypt_cipher": { + Type: schema.TypeString, + Computed: true, + }, }, }, }, @@ -474,6 +491,11 @@ func resourceComputeInstanceCreate(ctx context.Context, d *schema.ResourceData, if epsID != "" { extendParam.EnterpriseProjectId = epsID } + + extBootType := d.Get("ext_boot_type").(string) + if extBootType == "LocalDisk" { + extendParam.Image_Boot = true + } if extendParam != (cloudservers.ServerExtendParam{}) { createOpts.ExtendParam = &extendParam } @@ -1346,11 +1368,11 @@ func resourceInstanceDataVolumes(d *schema.ResourceData) []cloudservers.DataVolu } if vol["kms_key_id"] != "" { - matadata := cloudservers.VolumeMetadata{ - SystemEncrypted: "1", - SystemCmkid: vol["kms_key_id"].(string), + encryptioninfo := cloudservers.VolumeEncryptInfo{ + CmkId: vol["kms_key_id"].(string), + Cipher: vol["encrypt_cipher"].(string), } - volRequest.Metadata = &matadata + volRequest.EncryptionInfo = &encryptioninfo } volRequests = append(volRequests, volRequest) From 9dffa0dada8d87b8760a80fffd50c800ff18a6e8 Mon Sep 17 00:00:00 2001 From: huawei Date: Tue, 27 Aug 2024 16:28:31 +0800 Subject: [PATCH 5/8] feat(backend): modify docs of backend --- docs/guides/remote-state-backend.md | 22 +++++++++++++++------- 1 file changed, 15 insertions(+), 7 deletions(-) diff --git a/docs/guides/remote-state-backend.md b/docs/guides/remote-state-backend.md index aa454644..2216273b 100644 --- a/docs/guides/remote-state-backend.md +++ b/docs/guides/remote-state-backend.md @@ -46,18 +46,23 @@ export AWS_SECRET_ACCESS_KEY="your secretkey" ``` The backend configuration as follows: 
+Assuming region is **myregion**, cloud is **mycloud.com** ```hcl terraform { backend "s3" { - bucket = "terraformbucket" - key = "terraform.tfstate" - region = "cn-north-1" - endpoint = "https://obs.cn-north-1.myhuaweicloud.com" + bucket = "terraformbucket" + key = "terraform.tfstate" + region = "myregion" + + endpoints = { + s3 = "https://obsv3.myregion.mycloud.com" + } skip_region_validation = true skip_credentials_validation = true skip_metadata_api_check = true + skip_requesting_account_id = true } } ``` @@ -81,9 +86,10 @@ The following arguments are supported: * `region` - (Required) Specifies the region where the bucket is located. This can also be sourced from the *AWS_DEFAULT_REGION* and *AWS_REGION* environment variables. -* `endpoint` - (Required) Specifies the endpoint for HuaweiCloudStack OBS. - The value is `https://obs.{{region}}.myhuaweicloud.com`. - This can also be sourced from the *AWS_S3_ENDPOINT* environment variable. +* `endpoints.s3` - (Required) Specifies the endpoint for HuaweiCloudStack OBS. + The value is `https://obsv3.{{region}}.{{cloud}}`. + This can also be sourced from the environment variable *AWS_ENDPOINT_URL_S3* or the deprecated environment variable + *AWS_S3_ENDPOINT*. * `skip_credentials_validation` - (Required) Skip credentials validation via the STS API. It's mandatory for HuaweiCloudStack. @@ -92,6 +98,8 @@ The following arguments are supported: * `skip_metadata_api_check` - (Required) Skip usage of EC2 Metadata API. It's mandatory for HuaweiCloudStack. +* `skip_requesting_account_id` - (Required) Skip requesting the account ID. It's mandatory for HuaweiCloudStack. + * `workspace_key_prefix` - (Optional) Specifies the prefix applied to the state path inside the bucket. This parameter is only valid when using a non-default [workspace](https://www.terraform.io/docs/language/state/workspaces.html). When using a non-default workspace, the state path will be `/workspace_key_prefix/workspace_name/key_name`. From 274a572c5959732a11276dc95312b29acc34d4b8 Mon Sep 17 00:00:00 2001 From: huawei Date: Mon, 26 Aug 2024 17:48:48 +0800 Subject: [PATCH 6/8] feat(OBS): Change resource_obs_bucket referenced from HC to HCS customized --- docs/resources/obs_bucket.md | 44 + huaweicloudstack/common/eps_management.go | 33 + huaweicloudstack/provider.go | 3 +- .../services/obs/resource_hcs_obs_bucket.go | 1801 +++++++++++++++++ 4 files changed, 1880 insertions(+), 1 deletion(-) create mode 100644 huaweicloudstack/common/eps_management.go create mode 100644 huaweicloudstack/services/obs/resource_hcs_obs_bucket.go diff --git a/docs/resources/obs_bucket.md b/docs/resources/obs_bucket.md index e9fc92af..c13d4a97 100644 --- a/docs/resources/obs_bucket.md +++ b/docs/resources/obs_bucket.md @@ -112,6 +112,20 @@ resource "hcs_obs_bucket" "bucket" { } ``` +### Create Fusion Bucket + +```hcl +resource "hcs_obs_bucket" "bucket" { + bucket = "my-bucket" + acl = "private" + storage_class = "STANDARD" + + bucket_redundancy = "FUSION" + fusion_allow_upgrade = true + fusion_allow_alternative = true +} +``` + ## Argument Reference The following arguments are supported: @@ -193,6 +207,36 @@ The following arguments are supported: -> When creating or updating the OBS bucket user domain names, the original user domain names will be overwritten. +* `bucket_redundancy` - (Optional, String) Specify the type of OBS bucket. It is **Required** when create fusion bucket. + If change a **CLASSIC** bucket to **FUSION** bucket, `fusion_allow_upgrade` must be **true**. + Valid value are as follows. 
+ + **CLASSIC**. Create a CLASSIC bucket. + + **FUSION**. Create a FUSION bucket. + + The API only support change a **CLASSIC** bucket to **FUSION** bucket. On the contrary, if change a **FUSION** bucket + to **CLASSIC** bucket, it's not supported and will not return any error message. Default to **CLASSIC**. + +* `fusion_allow_upgrade` - (Optional, Bool) Specify when the bucket already existing, whether upgrade it to + fusion bucket. Valid value are as follows. + + **true**. Allow upgrade existing bucket to fusion bucket. + + **false**. Deny upgrade existing bucket to fusion bucket. + + Default to **false**. + +-> **NOTE:** The constraint of **fusion_allow_upgrade** are as follows: ++ It is valid only when `bucket_redundancy` is **FUSION**. ++ When the bucket has already exists, but the requester and the bucket owner are not the same user (that is, + they have different domain_id values), the API returns error code `409 BucketAlreadyExists`. ++ A bucket with cross-region replication configured cannot be upgraded to a converged bucket. + +* `fusion_allow_alternative` - (Optional, Bool) If the environment does not support fusion bucket, but you send a + request for creating a fusion bucket, whether allow the system automatically create a bucket of the classic bucket. + Valid value are as follows. + + **true**. Allow system to create CLASSIC bucket. + + **false**. Deny system to create CLASSIC bucket. + + Default to **false**. + The `logging` object supports the following: diff --git a/huaweicloudstack/common/eps_management.go b/huaweicloudstack/common/eps_management.go new file mode 100644 index 00000000..ffc133d8 --- /dev/null +++ b/huaweicloudstack/common/eps_management.go @@ -0,0 +1,33 @@ +package common + +import ( + "fmt" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" + + "github.com/chnsz/golangsdk/openstack/eps/v1/enterpriseprojects" + + "github.com/huaweicloud/terraform-provider-huaweicloud/huaweicloud/config" +) + +// MigrateEnterpriseProjectWithoutWait is a method that used to a migrate resource from an enterprise project to +// another. +// NOTE: Please read the following contents carefully before using this method. +// - This method only sends an asynchronous request and does not guarantee the result. 
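+// - The target enterprise project ID is read from the resource data via cfg.GetEnterpriseProjectID and falls back
+//   to the default enterprise project "0" when it is empty; the resource identified by opts.ResourceId is then
+//   migrated through the EPS v1 API.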
+func MigrateEnterpriseProjectWithoutWait(cfg *config.Config, d *schema.ResourceData, + opts enterpriseprojects.MigrateResourceOpts) error { + targetEpsId := cfg.GetEnterpriseProjectID(d) + if targetEpsId == "" { + targetEpsId = "0" + } + + client, err := cfg.EnterpriseProjectClient(cfg.GetRegion(d)) + if err != nil { + return fmt.Errorf("error creating EPS client: %s", err) + } + _, err = enterpriseprojects.Migrate(client, opts, targetEpsId).Extract() + if err != nil { + return fmt.Errorf("failed to migrate resource (%s) to the enterprise project (%s): %s", + opts.ResourceId, targetEpsId, err) + } + return nil +} diff --git a/huaweicloudstack/provider.go b/huaweicloudstack/provider.go index 8849df76..fdee9343 100644 --- a/huaweicloudstack/provider.go +++ b/huaweicloudstack/provider.go @@ -36,6 +36,7 @@ import ( "github.com/huaweicloud/terraform-provider-hcs/huaweicloudstack/services/evs" "github.com/huaweicloud/terraform-provider-hcs/huaweicloudstack/services/ims" "github.com/huaweicloud/terraform-provider-hcs/huaweicloudstack/services/nat" + hcsObs "github.com/huaweicloud/terraform-provider-hcs/huaweicloudstack/services/obs" "github.com/huaweicloud/terraform-provider-hcs/huaweicloudstack/services/smn" "github.com/huaweicloud/terraform-provider-hcs/huaweicloudstack/services/vpc" "github.com/huaweicloud/terraform-provider-hcs/huaweicloudstack/services/vpcep" @@ -489,7 +490,7 @@ func Provider() *schema.Provider { "hcs_mrs_cluster": mrs.ResourceMRSClusterV2(), "hcs_mrs_job": mrs.ResourceMRSJobV2(), - "hcs_obs_bucket": obs.ResourceObsBucket(), + "hcs_obs_bucket": hcsObs.ResourceObsBucket(), "hcs_obs_bucket_acl": obs.ResourceOBSBucketAcl(), "hcs_obs_bucket_object": obs.ResourceObsBucketObject(), "hcs_obs_bucket_object_acl": obs.ResourceOBSBucketObjectAcl(), diff --git a/huaweicloudstack/services/obs/resource_hcs_obs_bucket.go b/huaweicloudstack/services/obs/resource_hcs_obs_bucket.go new file mode 100644 index 00000000..25888407 --- /dev/null +++ b/huaweicloudstack/services/obs/resource_hcs_obs_bucket.go @@ -0,0 +1,1801 @@ +package obs + +import ( + "bytes" + "context" + "encoding/json" + "fmt" + "log" + "net/url" + "strings" + "time" + + "github.com/hashicorp/go-multierror" + "github.com/hashicorp/terraform-plugin-sdk/v2/diag" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation" + + "github.com/chnsz/golangsdk" + "github.com/chnsz/golangsdk/openstack/eps/v1/enterpriseprojects" + "github.com/chnsz/golangsdk/openstack/obs" + + "github.com/huaweicloud/terraform-provider-hcs/huaweicloudstack/common" + "github.com/huaweicloud/terraform-provider-hcs/huaweicloudstack/config" + "github.com/huaweicloud/terraform-provider-hcs/huaweicloudstack/helper/hashcode" + "github.com/huaweicloud/terraform-provider-hcs/huaweicloudstack/utils" +) + +func ResourceObsBucket() *schema.Resource { + return &schema.Resource{ + CreateContext: resourceObsBucketCreate, + ReadContext: resourceObsBucketRead, + UpdateContext: resourceObsBucketUpdate, + DeleteContext: resourceObsBucketDelete, + Importer: &schema.ResourceImporter{ + StateContext: resourceObsBucketImport, + }, + + Schema: map[string]*schema.Schema{ + "bucket": { + Type: schema.TypeString, + Required: true, + ForceNew: true, + }, + + "storage_class": { + Type: schema.TypeString, + Optional: true, + Default: "STANDARD", + }, + + "acl": { + Type: schema.TypeString, + Optional: true, + Default: "private", + }, + + "policy": { + 
Type: schema.TypeString, + Optional: true, + Computed: true, + ValidateFunc: utils.ValidateJsonString, + DiffSuppressFunc: utils.SuppressEquivalentAwsPolicyDiffs, + }, + + "policy_format": { + Type: schema.TypeString, + Optional: true, + Default: "obs", + }, + + "versioning": { + Type: schema.TypeBool, + Optional: true, + Default: false, + }, + + "logging": { + Type: schema.TypeSet, + Optional: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "target_bucket": { + Type: schema.TypeString, + Required: true, + }, + "target_prefix": { + Type: schema.TypeString, + Optional: true, + Default: "logs/", + }, + }, + }, + }, + + "quota": { + Type: schema.TypeInt, + Optional: true, + Default: 0, + ValidateFunc: validation.IntAtLeast(0), + }, + + "storage_info": { + Type: schema.TypeList, + Computed: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "size": { + Type: schema.TypeInt, + Computed: true, + }, + "object_number": { + Type: schema.TypeInt, + Computed: true, + }, + }, + }, + }, + + "lifecycle_rule": { + Type: schema.TypeList, + Optional: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "name": { + Type: schema.TypeString, + Required: true, + }, + "enabled": { + Type: schema.TypeBool, + Required: true, + }, + "prefix": { + Type: schema.TypeString, + Optional: true, + }, + "expiration": { + Type: schema.TypeSet, + Optional: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "days": { + Type: schema.TypeInt, + Required: true, + }, + }, + }, + }, + "transition": { + Type: schema.TypeList, + Optional: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "days": { + Type: schema.TypeInt, + Required: true, + }, + "storage_class": { + Type: schema.TypeString, + Required: true, + }, + }, + }, + }, + "noncurrent_version_expiration": { + Type: schema.TypeSet, + Optional: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "days": { + Type: schema.TypeInt, + Required: true, + }, + }, + }, + }, + "abort_incomplete_multipart_upload": { + Type: schema.TypeSet, + Optional: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "days": { + Type: schema.TypeInt, + Required: true, + }, + }, + }, + }, + "noncurrent_version_transition": { + Type: schema.TypeList, + Optional: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "days": { + Type: schema.TypeInt, + Required: true, + }, + "storage_class": { + Type: schema.TypeString, + Required: true, + }, + }, + }, + }, + }, + }, + }, + + "website": { + Type: schema.TypeList, + Optional: true, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "index_document": { + Type: schema.TypeString, + Optional: true, + }, + + "error_document": { + Type: schema.TypeString, + Optional: true, + }, + + "redirect_all_requests_to": { + Type: schema.TypeString, + ConflictsWith: []string{ + "website.0.index_document", + "website.0.error_document", + "website.0.routing_rules", + }, + Optional: true, + }, + + "routing_rules": { + Type: schema.TypeString, + Optional: true, + ValidateFunc: utils.ValidateJsonString, + StateFunc: func(v interface{}) string { + jsonString, _ := utils.NormalizeJsonString(v) + return jsonString + }, + }, + }, + }, + }, + + "cors_rule": { + Type: schema.TypeList, + Optional: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "allowed_origins": { + Type: schema.TypeList, + Required: true, + Elem: &schema.Schema{Type: schema.TypeString}, + }, + 
"allowed_methods": { + Type: schema.TypeList, + Required: true, + Elem: &schema.Schema{Type: schema.TypeString}, + }, + "allowed_headers": { + Type: schema.TypeList, + Optional: true, + Elem: &schema.Schema{Type: schema.TypeString}, + }, + "expose_headers": { + Type: schema.TypeList, + Optional: true, + Elem: &schema.Schema{Type: schema.TypeString}, + }, + "max_age_seconds": { + Type: schema.TypeInt, + Optional: true, + Default: 100, + }, + }, + }, + }, + + "tags": common.TagsSchema(), + "force_destroy": { + Type: schema.TypeBool, + Optional: true, + Default: false, + }, + + "region": { + Type: schema.TypeString, + Optional: true, + Computed: true, + ForceNew: true, + }, + "multi_az": { + Type: schema.TypeBool, + Optional: true, + Computed: true, + ForceNew: true, + }, + "parallel_fs": { + Type: schema.TypeBool, + Optional: true, + ForceNew: true, + }, + "encryption": { + Type: schema.TypeBool, + Optional: true, + Default: false, + }, + "kms_key_id": { + Type: schema.TypeString, + Optional: true, + }, + "kms_key_project_id": { + Type: schema.TypeString, + Optional: true, + Computed: true, + }, + "enterprise_project_id": { + Type: schema.TypeString, + Optional: true, + Computed: true, + }, + "user_domain_names": { + Type: schema.TypeSet, + Elem: &schema.Schema{Type: schema.TypeString}, + Optional: true, + Computed: true, + }, + + "bucket_redundancy": { + Type: schema.TypeString, + Optional: true, + Default: obs.BucketRedundancyClassic, + }, + "fusion_allow_upgrade": { + Type: schema.TypeBool, + Default: false, + Optional: true, + }, + "fusion_allow_alternative": { + Type: schema.TypeBool, + Default: false, + Optional: true, + }, + + "bucket_domain_name": { + Type: schema.TypeString, + Computed: true, + }, + "bucket_version": { + Type: schema.TypeString, + Computed: true, + }, + }, + } +} + +func resourceObsBucketCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { + conf := meta.(*config.Config) + region := conf.GetRegion(d) + obsClient, err := conf.ObjectStorageClient(region) + if err != nil { + return diag.Errorf("Error creating OBS client: %s", err) + } + + bucket := d.Get("bucket").(string) + acl := d.Get("acl").(string) + class := d.Get("storage_class").(string) + bucketRedundancy := d.Get("bucket_redundancy").(string) + fusionAllowUpgrade := d.Get("fusion_allow_upgrade").(bool) + fusionAllowAlternative := d.Get("fusion_allow_alternative").(bool) + opts := &obs.CreateBucketInput{ + Bucket: bucket, + ACL: obs.AclType(acl), + StorageClass: obs.StorageClassType(class), + IsFSFileInterface: d.Get("parallel_fs").(bool), + Epid: conf.GetEnterpriseProjectID(d), + BucketRedundancy: obs.BucketRedundancyType(bucketRedundancy), + IsFusionAllowUpgrade: fusionAllowUpgrade, + IsRedundancyAllowALT: fusionAllowAlternative, + } + opts.Location = region + if _, ok := d.GetOk("multi_az"); ok { + opts.AvailableZone = "3az" + } + + log.Printf("[DEBUG] OBS bucket create opts: %#v", opts) + _, err = obsClient.CreateBucket(opts) + if err != nil { + return diag.FromErr(getObsError("Error creating bucket", bucket, err)) + } + + // Assign the bucket name as the resource ID + d.SetId(bucket) + return resourceObsBucketUpdate(ctx, d, meta) +} + +func resourceObsBucketUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { + conf := meta.(*config.Config) + region := conf.GetRegion(d) + obsClient, err := conf.ObjectStorageClient(region) + if err != nil { + return diag.Errorf("Error creating OBS client: %s", err) + } + + obsClientWithSignature, err 
:= conf.ObjectStorageClientWithSignature(region) + if err != nil { + return diag.Errorf("Error creating OBS client with signature: %s", err) + } + + log.Printf("[DEBUG] Update OBS bucket %s", d.Id()) + if d.HasChange("acl") && !d.IsNewResource() { + if err := updateObsBucketAcl(obsClient, d); err != nil { + return diag.FromErr(err) + } + } + + if d.HasChange("storage_class") && !d.IsNewResource() { + if err := resourceObsBucketClassUpdate(obsClient, d); err != nil { + return diag.FromErr(err) + } + } + + if d.HasChange("policy") { + policyClient := obsClientWithSignature + if d.Get("policy_format").(string) != "obs" { + policyClient = obsClient + } + if err := resourceObsBucketPolicyUpdate(policyClient, d); err != nil { + return diag.FromErr(err) + } + } + + if d.HasChange("tags") { + if err := resourceObsBucketTagsUpdate(obsClient, d); err != nil { + return diag.FromErr(err) + } + } + + if d.HasChange("versioning") { + if err := resourceObsBucketVersioningUpdate(obsClient, d); err != nil { + return diag.FromErr(err) + } + } + + if d.HasChanges("encryption", "kms_key_id", "kms_key_project_id") { + if err := resourceObsBucketEncryptionUpdate(conf, obsClientWithSignature, d); err != nil { + return diag.FromErr(err) + } + } + + if d.HasChange("logging") { + if err := resourceObsBucketLoggingUpdate(obsClient, d); err != nil { + return diag.FromErr(err) + } + } + + if d.HasChange("quota") { + if err := resourceObsBucketQuotaUpdate(obsClient, d); err != nil { + return diag.FromErr(err) + } + } + + if d.HasChange("lifecycle_rule") { + if err := resourceObsBucketLifecycleUpdate(obsClient, d); err != nil { + return diag.FromErr(err) + } + } + + if d.HasChange("website") { + if err := resourceObsBucketWebsiteUpdate(obsClient, d); err != nil { + return diag.FromErr(err) + } + } + + if d.HasChange("cors_rule") { + if err := resourceObsBucketCorsUpdate(obsClient, d); err != nil { + return diag.FromErr(err) + } + } + + if d.HasChange("enterprise_project_id") && !d.IsNewResource() { + // API Limitations: still requires `project_id` field when migrating the EPS of OBS bucket + if err := resourceObsBucketEnterpriseProjectIdUpdate(ctx, d, conf, obsClient, region); err != nil { + return diag.FromErr(err) + } + + } + + if d.HasChange("user_domain_names") { + if err := resourceObsBucketUserDomainNamesUpdate(obsClient, d); err != nil { + return diag.FromErr(err) + } + } + + if d.HasChange("bucket_redundancy") || d.HasChange("fusion_allow_upgrade") || d.HasChange("fusion_allow_alternative") { + if err := resourceObsBucketRedundancyUpdate(obsClient, d); err != nil { + return diag.FromErr(err) + } + } + + return resourceObsBucketRead(ctx, d, meta) +} + +func resourceObsBucketRead(_ context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { + conf := meta.(*config.Config) + region := conf.GetRegion(d) + obsClient, err := conf.ObjectStorageClient(region) + if err != nil { + return diag.Errorf("Error creating OBS client: %s", err) + } + + bucket := d.Id() + log.Printf("[DEBUG] Read OBS bucket: %s", bucket) + _, err = obsClient.HeadBucket(bucket) + if err != nil { + if obsError, ok := err.(obs.ObsError); ok && obsError.StatusCode == 404 { + d.SetId("") + return diag.Diagnostics{ + diag.Diagnostic{ + Severity: diag.Warning, + Summary: "Resource not found", + Detail: fmt.Sprintf("OBS bucket(%s) not found", bucket), + }, + } + } + return diag.Errorf("error reading OBS bucket %s: %s", bucket, err) + } + + mErr := &multierror.Error{} + // for import case + if _, ok := d.GetOk("bucket"); !ok { + mErr = 
multierror.Append(mErr, d.Set("bucket", bucket)) + } + + mErr = multierror.Append(mErr, + d.Set("region", region), + d.Set("bucket_domain_name", bucketDomainNameWithCloud(d.Get("bucket").(string), region, conf.Cloud)), + ) + if mErr.ErrorOrNil() != nil { + return diag.Errorf("error setting OBS attributes: %s", mErr) + } + + // Read storage class + if err := setObsBucketStorageClass(obsClient, d); err != nil { + return diag.FromErr(err) + } + + // Read enterprise project id, multi_az and parallel_fs + if err := setObsBucketMetadata(obsClient, d); err != nil { + return diag.FromErr(err) + } + + // Read the versioning + if err := setObsBucketVersioning(obsClient, d); err != nil { + return diag.FromErr(err) + } + + // Read the encryption configuration + if err := setObsBucketEncryption(obsClient, d); err != nil { + return diag.FromErr(err) + } + + // Read the logging configuration + if err := setObsBucketLogging(obsClient, d); err != nil { + return diag.FromErr(err) + } + + // Read the quota + if err := setObsBucketQuota(obsClient, d); err != nil { + return diag.FromErr(err) + } + + // Read the Lifecycle configuration + if err := setObsBucketLifecycleConfiguration(obsClient, d); err != nil { + return diag.FromErr(err) + } + + // Read the website configuration + if err := setObsBucketWebsiteConfiguration(obsClient, d); err != nil { + return diag.FromErr(err) + } + + // Read the CORS rules + if err := setObsBucketCorsRules(obsClient, d); err != nil { + return diag.FromErr(err) + } + + // Read the bucket policy + policyClient := obsClient + format := d.Get("policy_format").(string) + if format == "obs" { + policyClient, err = conf.ObjectStorageClientWithSignature(region) + if err != nil { + return diag.Errorf("Error creating OBS policy client: %s", err) + } + } + if err := setObsBucketPolicy(policyClient, d); err != nil { + return diag.FromErr(err) + } + + // Read the tags + if err := setObsBucketTags(obsClient, d); err != nil { + return diag.FromErr(err) + } + + // Read the storage info + if err := setObsBucketStorageInfo(obsClient, d); err != nil { + return diag.FromErr(err) + } + + if err := setObsBucketUserDomainNames(obsClient, d); err != nil { + return diag.FromErr(err) + } + + return nil +} + +func resourceObsBucketDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { + conf := meta.(*config.Config) + obsClient, err := conf.ObjectStorageClient(conf.GetRegion(d)) + if err != nil { + return diag.Errorf("Error creating OBS client: %s", err) + } + + bucket := d.Id() + log.Printf("[DEBUG] deleting OBS Bucket: %s", bucket) + _, err = obsClient.DeleteBucket(bucket) + if err != nil { + obsError, ok := err.(obs.ObsError) + if !ok { + return diag.Errorf("Error deleting OBS bucket %s, %s", bucket, err) + } + if obsError.StatusCode == 404 { + return common.CheckDeletedDiag(d, golangsdk.ErrDefault404{}, "OBS bucket") + } + if obsError.Code == "BucketNotEmpty" { + log.Printf("[WARN] OBS bucket: %s is not empty", bucket) + if d.Get("force_destroy").(bool) { + err = deleteAllBucketObjects(obsClient, bucket) + if err == nil { + log.Printf("[WARN] all objects of %s have been deleted, and try again", bucket) + return resourceObsBucketDelete(ctx, d, meta) + } + } + return diag.FromErr(err) + } + } + return nil +} + +func resourceObsBucketTagsUpdate(obsClient *obs.ObsClient, d *schema.ResourceData) error { + bucket := d.Get("bucket").(string) + tagMap := d.Get("tags").(map[string]interface{}) + var tagList []obs.Tag + for k, v := range tagMap { + tag := obs.Tag{ + Key: k, + 
Value: v.(string), + } + tagList = append(tagList, tag) + } + + req := &obs.SetBucketTaggingInput{} + req.Bucket = bucket + req.Tags = tagList + log.Printf("[DEBUG] set tags of OBS bucket %s: %#v", bucket, req) + + _, err := obsClient.SetBucketTagging(req) + if err != nil { + return getObsError("Error updating tags of OBS bucket", bucket, err) + } + return nil +} + +func updateObsBucketAcl(obsClient *obs.ObsClient, d *schema.ResourceData) error { + bucket := d.Get("bucket").(string) + acl := d.Get("acl").(string) + + input := &obs.SetBucketAclInput{ + Bucket: bucket, + ACL: obs.AclType(acl), + } + log.Printf("[DEBUG] set ACL of OBS bucket %s: %#v", bucket, input) + + _, err := obsClient.SetBucketAcl(input) + if err != nil { + return getObsError("Error updating acl of OBS bucket", bucket, err) + } + + // acl policy can not be retrieved by obsClient.GetBucketAcl method + if err := d.Set("acl", acl); err != nil { + return fmt.Errorf("error saving acl of OBS bucket %s: %s", bucket, err) + } + return nil +} + +func resourceObsBucketPolicyUpdate(obsClient *obs.ObsClient, d *schema.ResourceData) error { + bucket := d.Get("bucket").(string) + policy := d.Get("policy").(string) + + if policy != "" { + log.Printf("[DEBUG] OBS bucket: %s, set policy: %s", bucket, policy) + params := &obs.SetBucketPolicyInput{ + Bucket: bucket, + Policy: policy, + } + + if _, err := obsClient.SetBucketPolicy(params); err != nil { + return getObsError("Error setting OBS bucket policy", bucket, err) + } + } else { + log.Printf("[DEBUG] OBS bucket: %s, delete policy", bucket) + _, err := obsClient.DeleteBucketPolicy(bucket) + if err != nil { + return getObsError("Error deleting policy of OBS bucket", bucket, err) + } + } + + return nil +} + +func resourceObsBucketClassUpdate(obsClient *obs.ObsClient, d *schema.ResourceData) error { + bucket := d.Get("bucket").(string) + class := d.Get("storage_class").(string) + + input := &obs.SetBucketStoragePolicyInput{} + input.Bucket = bucket + input.StorageClass = obs.StorageClassType(class) + log.Printf("[DEBUG] set storage class of OBS bucket %s: %#v", bucket, input) + + _, err := obsClient.SetBucketStoragePolicy(input) + if err != nil { + return getObsError("Error updating storage class of OBS bucket", bucket, err) + } + + return nil +} + +func resourceObsBucketVersioningUpdate(obsClient *obs.ObsClient, d *schema.ResourceData) error { + bucket := d.Get("bucket").(string) + version := d.Get("versioning").(bool) + + input := &obs.SetBucketVersioningInput{} + input.Bucket = bucket + if version { + input.Status = obs.VersioningStatusEnabled + } else { + input.Status = obs.VersioningStatusSuspended + } + log.Printf("[DEBUG] set versioning of OBS bucket %s: %#v", bucket, input) + + _, err := obsClient.SetBucketVersioning(input) + if err != nil { + return getObsError("Error setting versioning status of OBS bucket", bucket, err) + } + + return nil +} + +func resourceObsBucketEncryptionUpdate(config *config.Config, obsClient *obs.ObsClient, d *schema.ResourceData) error { + bucket := d.Get("bucket").(string) + + if d.Get("encryption").(bool) { + input := &obs.SetBucketEncryptionInput{} + input.Bucket = bucket + input.SSEAlgorithm = obs.DEFAULT_SSE_KMS_ENCRYPTION_OBS + input.KMSMasterKeyID = d.Get("kms_key_id").(string) + + if _, ok := d.GetOk("kms_key_id"); ok { + if v, ok := d.GetOk("kms_key_project_id"); ok { + input.ProjectID = v.(string) + } else { + input.ProjectID = config.GetProjectID(config.GetRegion(d)) + } + } + + log.Printf("[DEBUG] enable default encryption of OBS bucket %s: 
%#v", bucket, input) + _, err := obsClient.SetBucketEncryption(input) + if err != nil { + return getObsError("failed to enable default encryption of OBS bucket", bucket, err) + } + } else if !d.IsNewResource() { + _, err := obsClient.DeleteBucketEncryption(bucket) + if err != nil { + return getObsError("failed to disable default encryption of OBS bucket", bucket, err) + } + } + + return nil +} + +func resourceObsBucketLoggingUpdate(obsClient *obs.ObsClient, d *schema.ResourceData) error { + bucket := d.Get("bucket").(string) + rawLogging := d.Get("logging").(*schema.Set).List() + loggingStatus := &obs.SetBucketLoggingConfigurationInput{} + loggingStatus.Bucket = bucket + + if len(rawLogging) > 0 { + c := rawLogging[0].(map[string]interface{}) + if val := c["target_bucket"].(string); val != "" { + loggingStatus.TargetBucket = val + } + + if val := c["target_prefix"].(string); val != "" { + loggingStatus.TargetPrefix = val + } + } + log.Printf("[DEBUG] set logging of OBS bucket %s: %#v", bucket, loggingStatus) + + _, err := obsClient.SetBucketLoggingConfiguration(loggingStatus) + if err != nil { + return getObsError("Error setting logging configuration of OBS bucket", bucket, err) + } + + return nil +} + +func resourceObsBucketQuotaUpdate(obsClient *obs.ObsClient, d *schema.ResourceData) error { + bucket := d.Get("bucket").(string) + quota := d.Get("quota").(int) + quotaInput := &obs.SetBucketQuotaInput{} + quotaInput.Bucket = bucket + quotaInput.BucketQuota.Quota = int64(quota) + + _, err := obsClient.SetBucketQuota(quotaInput) + if err != nil { + return getObsError("Error setting quota of OBS bucket", bucket, err) + } + + return nil + +} + +func resourceObsBucketLifecycleUpdate(obsClient *obs.ObsClient, d *schema.ResourceData) error { + bucket := d.Get("bucket").(string) + lifecycleRules := d.Get("lifecycle_rule").([]interface{}) + + if len(lifecycleRules) == 0 { + log.Printf("[DEBUG] remove all lifecycle rules of bucket %s", bucket) + _, err := obsClient.DeleteBucketLifecycleConfiguration(bucket) + if err != nil { + return getObsError("Error deleting lifecycle rules of OBS bucket", bucket, err) + } + return nil + } + + rules := make([]obs.LifecycleRule, len(lifecycleRules)) + for i, lifecycleRule := range lifecycleRules { + r := lifecycleRule.(map[string]interface{}) + + // rule ID + rules[i].ID = r["name"].(string) + + // Enabled + if val, ok := r["enabled"].(bool); ok && val { + rules[i].Status = obs.RuleStatusEnabled + } else { + rules[i].Status = obs.RuleStatusDisabled + } + + // Prefix + rules[i].Prefix = r["prefix"].(string) + + // Expiration + expiration := d.Get(fmt.Sprintf("lifecycle_rule.%d.expiration", i)).(*schema.Set).List() + if len(expiration) > 0 { + raw := expiration[0].(map[string]interface{}) + exp := &rules[i].Expiration + + if val, ok := raw["days"].(int); ok && val > 0 { + exp.Days = val + } + } + + // Transition + transitions := d.Get(fmt.Sprintf("lifecycle_rule.%d.transition", i)).([]interface{}) + list := make([]obs.Transition, len(transitions)) + for j, tran := range transitions { + raw := tran.(map[string]interface{}) + + if val, ok := raw["days"].(int); ok && val > 0 { + list[j].Days = val + } + if val, ok := raw["storage_class"].(string); ok { + list[j].StorageClass = obs.StorageClassType(val) + } + } + rules[i].Transitions = list + + // NoncurrentVersionExpiration + ncExpiration := d.Get(fmt.Sprintf("lifecycle_rule.%d.noncurrent_version_expiration", i)).(*schema.Set).List() + if len(ncExpiration) > 0 { + raw := ncExpiration[0].(map[string]interface{}) + ncExp 
:= &rules[i].NoncurrentVersionExpiration + + if val, ok := raw["days"].(int); ok && val > 0 { + ncExp.NoncurrentDays = val + } + } + + // AbortIncompleteMultipartUpload + abortIncompleteMultipartUpload := d.Get(fmt.Sprintf("lifecycle_rule.%d.abort_incomplete_multipart_upload", + i)).(*schema.Set).List() + if len(abortIncompleteMultipartUpload) > 0 { + raw := abortIncompleteMultipartUpload[0].(map[string]interface{}) + abincomMultipartUpload := &rules[i].AbortIncompleteMultipartUpload + + if val, ok := raw["days"].(int); ok && val > 0 { + abincomMultipartUpload.DaysAfterInitiation = val + } + } + + // NoncurrentVersionTransition + ncTransitions := d.Get(fmt.Sprintf("lifecycle_rule.%d.noncurrent_version_transition", i)).([]interface{}) + ncList := make([]obs.NoncurrentVersionTransition, len(ncTransitions)) + for j, ncTran := range ncTransitions { + raw := ncTran.(map[string]interface{}) + + if val, ok := raw["days"].(int); ok && val > 0 { + ncList[j].NoncurrentDays = val + } + if val, ok := raw["storage_class"].(string); ok { + ncList[j].StorageClass = obs.StorageClassType(val) + } + } + rules[i].NoncurrentVersionTransitions = ncList + } + + opts := &obs.SetBucketLifecycleConfigurationInput{} + opts.Bucket = bucket + opts.LifecycleRules = rules + log.Printf("[DEBUG] set lifecycle configurations of OBS bucket %s: %#v", bucket, opts) + + _, err := obsClient.SetBucketLifecycleConfiguration(opts) + if err != nil { + return getObsError("Error setting lifecycle rules of OBS bucket", bucket, err) + } + + return nil +} + +func resourceObsBucketWebsiteUpdate(obsClient *obs.ObsClient, d *schema.ResourceData) error { + ws := d.Get("website").([]interface{}) + + if len(ws) == 1 { + var w map[string]interface{} + if ws[0] != nil { + w = ws[0].(map[string]interface{}) + } else { + w = make(map[string]interface{}) + } + return resourceObsBucketWebsitePut(obsClient, d, w) + } + if len(ws) == 0 { + return resourceObsBucketWebsiteDelete(obsClient, d) + } + return fmt.Errorf("cannot specify more than one website") +} + +func resourceObsBucketCorsUpdate(obsClient *obs.ObsClient, d *schema.ResourceData) error { + bucket := d.Get("bucket").(string) + rawCors := d.Get("cors_rule").([]interface{}) + + if len(rawCors) == 0 { + // Delete CORS + log.Printf("[DEBUG] delete CORS rules of OBS bucket: %s", bucket) + _, err := obsClient.DeleteBucketCors(bucket) + if err != nil { + return getObsError("Error deleting CORS rules of OBS bucket", bucket, err) + } + return nil + } + + // set CORS + rules := make([]obs.CorsRule, 0, len(rawCors)) + for _, cors := range rawCors { + corsMap := cors.(map[string]interface{}) + r := obs.CorsRule{} + for k, v := range corsMap { + if k == "max_age_seconds" { + r.MaxAgeSeconds = v.(int) + } else { + vMap := make([]string, len(v.([]interface{}))) + for i, vv := range v.([]interface{}) { + vMap[i] = vv.(string) + } + switch k { + case "allowed_headers": + r.AllowedHeader = vMap + case "allowed_methods": + r.AllowedMethod = vMap + case "allowed_origins": + r.AllowedOrigin = vMap + case "expose_headers": + r.ExposeHeader = vMap + } + } + } + log.Printf("[DEBUG] set CORS of OBS bucket %s: %#v", bucket, r) + rules = append(rules, r) + } + + corsInput := &obs.SetBucketCorsInput{} + corsInput.Bucket = bucket + corsInput.CorsRules = rules + log.Printf("[DEBUG] OBS bucket: %s, put CORS: %#v", bucket, corsInput) + + _, err := obsClient.SetBucketCors(corsInput) + if err != nil { + return getObsError("Error setting CORS rules of OBS bucket", bucket, err) + } + return nil +} + +func 
resourceObsBucketUserDomainNamesUpdate(obsClient *obs.ObsClient, d *schema.ResourceData) error { + bucket := d.Get("bucket").(string) + oldRaws, newRaws := d.GetChange("user_domain_names") + addRaws := newRaws.(*schema.Set).Difference(oldRaws.(*schema.Set)) + removeRaws := oldRaws.(*schema.Set).Difference(newRaws.(*schema.Set)) + + if err := deleteObsBucketUserDomainNames(obsClient, bucket, removeRaws); err != nil { + return err + } + return createObsBucketUserDomainNames(obsClient, bucket, addRaws) +} + +func createObsBucketUserDomainNames(obsClient *obs.ObsClient, bucket string, domainNameSet *schema.Set) error { + for _, domainName := range domainNameSet.List() { + input := &obs.SetBucketCustomDomainInput{ + Bucket: bucket, + CustomDomain: domainName.(string), + } + _, err := obsClient.SetBucketCustomDomain(input) + if err != nil { + return getObsError("error setting user domain name of OBS bucket", bucket, err) + } + } + return nil +} + +func deleteObsBucketUserDomainNames(obsClient *obs.ObsClient, bucket string, domainNameSet *schema.Set) error { + for _, domainName := range domainNameSet.List() { + input := &obs.DeleteBucketCustomDomainInput{ + Bucket: bucket, + CustomDomain: domainName.(string), + } + _, err := obsClient.DeleteBucketCustomDomain(input) + if err != nil { + return getObsError("error deleting user domain name of OBS bucket", bucket, err) + } + } + return nil +} + +func resourceObsBucketEnterpriseProjectIdUpdate(ctx context.Context, d *schema.ResourceData, conf *config.Config, + obsClient *obs.ObsClient, region string) error { + var ( + projectId = conf.GetProjectID(region) + bucket = d.Get("bucket").(string) + migrateOpts = enterpriseprojects.MigrateResourceOpts{ + ResourceId: bucket, + ResourceType: "bucket", + RegionId: region, + ProjectId: projectId, + } + ) + err := common.MigrateEnterpriseProjectWithoutWait(conf, d, migrateOpts) + if err != nil { + return err + } + + // After the EPS service side updates enterprise project ID, it will take a few time to wait the OBS service + // read the data back into the database. 
+ stateConf := &resource.StateChangeConf{ + Pending: []string{"Pending"}, + Target: []string{"Success"}, + Refresh: waitForOBSEnterpriseProjectIdChanged(obsClient, bucket, d.Get("enterprise_project_id").(string)), + Timeout: d.Timeout(schema.TimeoutUpdate), + Delay: 10 * time.Second, + PollInterval: 5 * time.Second, + } + _, err = stateConf.WaitForStateContext(ctx) + if err != nil { + return getObsError("error waiting for obs enterprise project ID changed", bucket, err) + } + return nil +} + +func waitForOBSEnterpriseProjectIdChanged(obsClient *obs.ObsClient, bucket string, enterpriseProjectId string) resource.StateRefreshFunc { + return func() (result interface{}, state string, err error) { + input := &obs.GetBucketMetadataInput{ + Bucket: bucket, + } + output, err := obsClient.GetBucketMetadata(input) + if err != nil { + return nil, "Error", err + } + + if output.Epid == enterpriseProjectId { + log.Printf("[DEBUG] the Enterprise Project ID of bucket %s is migrated to %s", bucket, enterpriseProjectId) + return output, "Success", nil + } + + return output, "Pending", nil + } +} + +func resourceObsBucketRedundancyUpdate(obsClient *obs.ObsClient, d *schema.ResourceData) error { + bucket := d.Get("bucket").(string) + bucketRedundancy := d.Get("bucket_redundancy").(string) + fusionAllowUpgrade := d.Get("fusion_allow_upgrade").(bool) + fusionAllowAlt := d.Get("fusion_allow_alternative").(bool) + + input := &obs.CreateBucketInput{ + Bucket: bucket, + BucketRedundancy: obs.BucketRedundancyType(bucketRedundancy), + IsFusionAllowUpgrade: fusionAllowUpgrade, + IsRedundancyAllowALT: fusionAllowAlt, + } + _, err := obsClient.CreateBucket(input) + if err != nil { + return err + } + + return nil +} + +func resourceObsBucketWebsitePut(obsClient *obs.ObsClient, d *schema.ResourceData, website map[string]interface{}) error { + bucket := d.Get("bucket").(string) + + var indexDocument, errorDocument, redirectAllRequestsTo, routingRules string + if v, ok := website["index_document"]; ok { + indexDocument = v.(string) + } + if v, ok := website["error_document"]; ok { + errorDocument = v.(string) + } + if v, ok := website["redirect_all_requests_to"]; ok { + redirectAllRequestsTo = v.(string) + } + if v, ok := website["routing_rules"]; ok { + routingRules = v.(string) + } + + if indexDocument == "" && redirectAllRequestsTo == "" { + return fmt.Errorf("must specify either index_document or redirect_all_requests_to") + } + + websiteConfiguration := &obs.SetBucketWebsiteConfigurationInput{} + websiteConfiguration.Bucket = bucket + + if indexDocument != "" { + websiteConfiguration.IndexDocument = obs.IndexDocument{ + Suffix: indexDocument, + } + } + + if errorDocument != "" { + websiteConfiguration.ErrorDocument = obs.ErrorDocument{ + Key: errorDocument, + } + } + + if redirectAllRequestsTo != "" { + redirect, err := url.Parse(redirectAllRequestsTo) + if err == nil && redirect.Scheme != "" { + var redirectHostBuf bytes.Buffer + redirectHostBuf.WriteString(redirect.Host) + if redirect.Path != "" { + redirectHostBuf.WriteString(redirect.Path) + } + websiteConfiguration.RedirectAllRequestsTo = obs.RedirectAllRequestsTo{ + HostName: redirectHostBuf.String(), + Protocol: obs.ProtocolType(redirect.Scheme), + } + } else { + websiteConfiguration.RedirectAllRequestsTo = obs.RedirectAllRequestsTo{ + HostName: redirectAllRequestsTo, + } + } + } + + if routingRules != "" { + var unmarshalRules []obs.RoutingRule + if err := json.Unmarshal([]byte(routingRules), &unmarshalRules); err != nil { + return err + } + 
websiteConfiguration.RoutingRules = unmarshalRules + } + + log.Printf("[DEBUG] set website configuration of OBS bucket %s: %#v", bucket, websiteConfiguration) + _, err := obsClient.SetBucketWebsiteConfiguration(websiteConfiguration) + if err != nil { + return getObsError("Error updating website configuration of OBS bucket", bucket, err) + } + + return nil +} + +func resourceObsBucketWebsiteDelete(obsClient *obs.ObsClient, d *schema.ResourceData) error { + bucket := d.Get("bucket").(string) + + log.Printf("[DEBUG] delete website configuration of OBS bucket %s", bucket) + _, err := obsClient.DeleteBucketWebsiteConfiguration(bucket) + if err != nil { + return getObsError("Error deleting website configuration of OBS bucket", bucket, err) + } + + return nil +} + +func setObsBucketStorageClass(obsClient *obs.ObsClient, d *schema.ResourceData) error { + bucket := d.Id() + output, err := obsClient.GetBucketStoragePolicy(bucket) + if err != nil { + log.Printf("[WARN] Error getting storage class of OBS bucket %s: %s", bucket, err) + } else { + class := output.StorageClass + log.Printf("[DEBUG] getting storage class of OBS bucket %s: %s", bucket, class) + if err := d.Set("storage_class", normalizeStorageClass(class)); err != nil { + return fmt.Errorf("error saving storage class of OBS bucket %s: %s", bucket, err) + } + } + + return nil +} + +func setObsBucketMetadata(obsClient *obs.ObsClient, d *schema.ResourceData) error { + bucket := d.Id() + input := &obs.GetBucketMetadataInput{ + Bucket: bucket, + } + output, err := obsClient.GetBucketMetadata(input) + if err != nil { + return getObsError("Error getting metadata of OBS bucket", bucket, err) + } + log.Printf("[DEBUG] getting metadata of OBS bucket %s: %#v", bucket, output) + + mErr := multierror.Append(nil, d.Set("enterprise_project_id", output.Epid)) + + if output.AZRedundancy == "3az" { + mErr = multierror.Append(mErr, d.Set("multi_az", true)) + } else { + mErr = multierror.Append(mErr, d.Set("multi_az", false)) + } + + if output.FSStatus == "Enabled" { + mErr = multierror.Append(mErr, d.Set("parallel_fs", true)) + } else { + mErr = multierror.Append(mErr, + d.Set("parallel_fs", false), + d.Set("bucket_version", output.Version), + ) + } + + if mErr.ErrorOrNil() != nil { + return fmt.Errorf("error saving metadata of OBS bucket %s: %s", bucket, mErr) + } + + return nil +} + +func setObsBucketPolicy(obsClient *obs.ObsClient, d *schema.ResourceData) error { + bucket := d.Id() + output, err := obsClient.GetBucketPolicy(bucket) + if err != nil { + if obsError, ok := err.(obs.ObsError); ok { + if obsError.Code == "NoSuchBucketPolicy" { + if err := d.Set("policy", nil); err != nil { + return fmt.Errorf("error saving policy of OBS bucket %s: %s", bucket, err) + } + return nil + } + return fmt.Errorf("error getting policy of OBS bucket %s: %s", bucket, err) + } + return err + } + + pol := output.Policy + log.Printf("[DEBUG] getting policy of OBS bucket %s: %s", bucket, pol) + if err := d.Set("policy", pol); err != nil { + return fmt.Errorf("error saving policy of OBS bucket %s: %s", bucket, err) + } + + return nil +} + +func setObsBucketVersioning(obsClient *obs.ObsClient, d *schema.ResourceData) error { + bucket := d.Id() + output, err := obsClient.GetBucketVersioning(bucket) + if err != nil { + return getObsError("Error getting versioning status of OBS bucket", bucket, err) + } + + log.Printf("[DEBUG] getting versioning status of OBS bucket %s: %s", bucket, output.Status) + mErr := &multierror.Error{} + if output.Status == obs.VersioningStatusEnabled { 
+ mErr = multierror.Append(mErr, d.Set("versioning", true)) + } else { + mErr = multierror.Append(mErr, d.Set("versioning", false)) + } + if mErr.ErrorOrNil() != nil { + return fmt.Errorf("error saving version of OBS bucket %s: %s", bucket, mErr) + } + + return nil +} + +func setObsBucketEncryption(obsClient *obs.ObsClient, d *schema.ResourceData) error { + bucket := d.Id() + output, err := obsClient.GetBucketEncryption(bucket) + if err != nil { + if obsError, ok := err.(obs.ObsError); ok { + if obsError.Code == "NoSuchEncryptionConfiguration" || obsError.Code == "FsNotSupport" { + mErr := multierror.Append(nil, + d.Set("encryption", false), + d.Set("kms_key_id", nil), + d.Set("kms_key_project_id", nil), + ) + if mErr.ErrorOrNil() != nil { + return fmt.Errorf("error saving encryption of OBS bucket %s: %s", bucket, mErr) + } + return nil + } + return fmt.Errorf("error getting encryption configuration of OBS bucket %s: %s", bucket, err) + } + return err + } + + log.Printf("[DEBUG] getting encryption configuration of OBS bucket %s: %+v", bucket, output.BucketEncryptionConfiguration) + mErr := &multierror.Error{} + if output.SSEAlgorithm != "" { + mErr = multierror.Append(mErr, + d.Set("encryption", true), + d.Set("kms_key_id", output.KMSMasterKeyID), + d.Set("kms_key_project_id", output.ProjectID), + ) + } else { + mErr = multierror.Append(mErr, + d.Set("encryption", false), + d.Set("kms_key_id", nil), + d.Set("kms_key_project_id", nil), + ) + } + if mErr.ErrorOrNil() != nil { + return fmt.Errorf("error saving encryption of OBS bucket %s: %s", bucket, mErr) + } + + return nil +} + +func setObsBucketLogging(obsClient *obs.ObsClient, d *schema.ResourceData) error { + bucket := d.Id() + output, err := obsClient.GetBucketLoggingConfiguration(bucket) + if err != nil { + return getObsError("Error getting logging configuration of OBS bucket", bucket, err) + } + + lcList := make([]map[string]interface{}, 0, 1) + logging := make(map[string]interface{}) + + if output.TargetBucket != "" { + logging["target_bucket"] = output.TargetBucket + if output.TargetPrefix != "" { + logging["target_prefix"] = output.TargetPrefix + } + lcList = append(lcList, logging) + } + log.Printf("[DEBUG] getting logging configuration of OBS bucket %s: %#v", bucket, lcList) + + if err := d.Set("logging", lcList); err != nil { + return fmt.Errorf("error saving logging configuration of OBS bucket %s: %s", bucket, err) + } + + return nil +} + +func setObsBucketQuota(obsClient *obs.ObsClient, d *schema.ResourceData) error { + bucket := d.Id() + output, err := obsClient.GetBucketQuota(bucket) + if err != nil { + return getObsError("Error getting quota of OBS bucket", bucket, err) + } + + log.Printf("[DEBUG] getting quota of OBS bucket %s: %d", bucket, output.Quota) + + if err := d.Set("quota", output.Quota); err != nil { + return fmt.Errorf("error saving quota of OBS bucket %s: %s", bucket, err) + } + + return nil +} + +func setObsBucketLifecycleConfiguration(obsClient *obs.ObsClient, d *schema.ResourceData) error { + bucket := d.Id() + output, err := obsClient.GetBucketLifecycleConfiguration(bucket) + if err != nil { + if obsError, ok := err.(obs.ObsError); ok { + if obsError.Code == "NoSuchLifecycleConfiguration" { + if err := d.Set("lifecycle_rule", nil); err != nil { + return fmt.Errorf("error saving lifecycle configuration of OBS bucket %s: %s", bucket, err) + } + return nil + } + return fmt.Errorf("error getting lifecycle configuration of OBS bucket %s: %s", bucket, err) + } + return err + } + + rawRules := 
output.LifecycleRules + log.Printf("[DEBUG] getting original lifecycle configuration of OBS bucket %s, lifecycle: %#v", bucket, rawRules) + + rules := make([]map[string]interface{}, 0, len(rawRules)) + for _, lifecycleRule := range rawRules { + rule := make(map[string]interface{}) + rule["name"] = lifecycleRule.ID + + // Enabled + if lifecycleRule.Status == obs.RuleStatusEnabled { + rule["enabled"] = true + } else { + rule["enabled"] = false + } + + if lifecycleRule.Prefix != "" { + rule["prefix"] = lifecycleRule.Prefix + } + + // expiration + if days := lifecycleRule.Expiration.Days; days > 0 { + e := make(map[string]interface{}) + e["days"] = days + rule["expiration"] = schema.NewSet(expirationHash, []interface{}{e}) + } + // transition + if len(lifecycleRule.Transitions) > 0 { + transitions := make([]interface{}, 0, len(lifecycleRule.Transitions)) + for _, v := range lifecycleRule.Transitions { + t := make(map[string]interface{}) + t["days"] = v.Days + t["storage_class"] = normalizeStorageClass(string(v.StorageClass)) + transitions = append(transitions, t) + } + rule["transition"] = transitions + } + + // noncurrent_version_expiration + if days := lifecycleRule.NoncurrentVersionExpiration.NoncurrentDays; days > 0 { + e := make(map[string]interface{}) + e["days"] = days + rule["noncurrent_version_expiration"] = schema.NewSet(expirationHash, []interface{}{e}) + } + + // abort_incomplete_multipart_upload + if days := lifecycleRule.AbortIncompleteMultipartUpload.DaysAfterInitiation; days > 0 { + a := make(map[string]interface{}) + a["days"] = days + rule["abort_incomplete_multipart_upload"] = schema.NewSet(expirationHash, []interface{}{a}) + } + + // noncurrent_version_transition + if len(lifecycleRule.NoncurrentVersionTransitions) > 0 { + transitions := make([]interface{}, 0, len(lifecycleRule.NoncurrentVersionTransitions)) + for _, v := range lifecycleRule.NoncurrentVersionTransitions { + t := make(map[string]interface{}) + t["days"] = v.NoncurrentDays + t["storage_class"] = normalizeStorageClass(string(v.StorageClass)) + transitions = append(transitions, t) + } + rule["noncurrent_version_transition"] = transitions + } + + rules = append(rules, rule) + } + + log.Printf("[DEBUG] saving lifecycle configuration of OBS bucket %s, lifecycle: %#v", bucket, rules) + if err := d.Set("lifecycle_rule", rules); err != nil { + return fmt.Errorf("error saving lifecycle configuration of OBS bucket %s: %s", bucket, err) + } + + return nil +} + +func setObsBucketWebsiteConfiguration(obsClient *obs.ObsClient, d *schema.ResourceData) error { + bucket := d.Id() + output, err := obsClient.GetBucketWebsiteConfiguration(bucket) + if err != nil { + if obsError, ok := err.(obs.ObsError); ok { + if obsError.Code == "NoSuchWebsiteConfiguration" { + if err := d.Set("website", nil); err != nil { + return fmt.Errorf("error saving website configuration of OBS bucket %s: %s", bucket, err) + } + return nil + } + return fmt.Errorf("error getting website configuration of OBS bucket %s: %s", bucket, err) + } + return err + } + + log.Printf("[DEBUG] getting original website configuration of OBS bucket %s, output: %#v", bucket, output.BucketWebsiteConfiguration) + var websites []map[string]interface{} + w := make(map[string]interface{}) + + w["index_document"] = output.IndexDocument.Suffix + w["error_document"] = output.ErrorDocument.Key + + // redirect_all_requests_to + v := output.RedirectAllRequestsTo + if string(v.Protocol) == "" { + w["redirect_all_requests_to"] = v.HostName + } else { + var host string + var path 
string + parsedHostName, err := url.Parse(v.HostName) + if err == nil { + host = parsedHostName.Host + path = parsedHostName.Path + } else { + host = v.HostName + path = "" + } + + w["redirect_all_requests_to"] = (&url.URL{ + Host: host, + Path: path, + Scheme: string(v.Protocol), + }).String() + } + + // routing_rules + rawRules := output.RoutingRules + if len(rawRules) > 0 { + rr, err := normalizeWebsiteRoutingRules(rawRules) + if err != nil { + return fmt.Errorf("error while marshaling website routing rules: %s", err) + } + w["routing_rules"] = rr + } + + websites = append(websites, w) + log.Printf("[DEBUG] saving website configuration of OBS bucket %s, website: %#v", bucket, websites) + if err := d.Set("website", websites); err != nil { + return fmt.Errorf("error saving website configuration of OBS bucket %s: %s", bucket, err) + } + return nil +} + +func setObsBucketCorsRules(obsClient *obs.ObsClient, d *schema.ResourceData) error { + bucket := d.Id() + output, err := obsClient.GetBucketCors(bucket) + if err != nil { + if obsError, ok := err.(obs.ObsError); ok { + if obsError.Code == "NoSuchCORSConfiguration" { + if err := d.Set("cors_rule", nil); err != nil { + return fmt.Errorf("error saving CORS rules of OBS bucket %s: %s", bucket, err) + } + return nil + } + return fmt.Errorf("error getting CORS configuration of OBS bucket %s: %s", bucket, err) + } + return err + } + + corsRules := output.CorsRules + log.Printf("[DEBUG] getting original CORS rules of OBS bucket %s, CORS: %#v", bucket, corsRules) + + rules := make([]map[string]interface{}, 0, len(corsRules)) + for _, ruleObject := range corsRules { + rule := make(map[string]interface{}) + rule["allowed_origins"] = ruleObject.AllowedOrigin + rule["allowed_methods"] = ruleObject.AllowedMethod + rule["max_age_seconds"] = ruleObject.MaxAgeSeconds + if ruleObject.AllowedHeader != nil { + rule["allowed_headers"] = ruleObject.AllowedHeader + } + if ruleObject.ExposeHeader != nil { + rule["expose_headers"] = ruleObject.ExposeHeader + } + + rules = append(rules, rule) + } + + log.Printf("[DEBUG] saving CORS rules of OBS bucket %s, CORS: %#v", bucket, rules) + if err := d.Set("cors_rule", rules); err != nil { + return fmt.Errorf("error saving CORS rules of OBS bucket %s: %s", bucket, err) + } + + return nil +} + +func setObsBucketTags(obsClient *obs.ObsClient, d *schema.ResourceData) error { + bucket := d.Id() + output, err := obsClient.GetBucketTagging(bucket) + if err != nil { + if obsError, ok := err.(obs.ObsError); ok { + if obsError.Code == "NoSuchTagSet" { + if err := d.Set("tags", nil); err != nil { + return fmt.Errorf("error saving tags of OBS bucket %s: %s", bucket, err) + } + return nil + } + return fmt.Errorf("error getting tags of OBS bucket %s: %s", bucket, err) + } + return err + } + + tagMap := make(map[string]string) + for _, tag := range output.Tags { + tagMap[tag.Key] = tag.Value + } + log.Printf("[DEBUG] getting tags of OBS bucket %s: %#v", bucket, tagMap) + if err := d.Set("tags", tagMap); err != nil { + return fmt.Errorf("error saving tags of OBS bucket %s: %s", bucket, err) + } + return nil +} + +func setObsBucketStorageInfo(obsClient *obs.ObsClient, d *schema.ResourceData) error { + bucket := d.Id() + output, err := obsClient.GetBucketStorageInfo(bucket) + if err != nil { + if _, ok := err.(obs.ObsError); ok { + return fmt.Errorf("error getting storage info of OBS bucket %s: %s", bucket, err) + } + return err + } + log.Printf("[DEBUG] getting storage info of OBS bucket %s: %#v", bucket, output) + + storages := 
make([]map[string]interface{}, 1) + storages[0] = map[string]interface{}{ + "size": output.Size, + "object_number": output.ObjectNumber, + } + + if err := d.Set("storage_info", storages); err != nil { + return fmt.Errorf("error saving storage info of OBS bucket %s: %s", bucket, err) + } + return nil +} + +func setObsBucketUserDomainNames(obsClient *obs.ObsClient, d *schema.ResourceData) error { + bucket := d.Id() + output, err := obsClient.GetBucketCustomDomain(bucket) + if err != nil { + return getObsError("Error getting user domain names of OBS bucket", bucket, err) + } + log.Printf("[DEBUG] getting user domain names of OBS bucket %s: %#v", bucket, output) + + domainNames := make([]string, len(output.Domains)) + for i, v := range output.Domains { + domainNames[i] = v.DomainName + } + return d.Set("user_domain_names", domainNames) +} + +func deleteAllBucketObjects(obsClient *obs.ObsClient, bucket string) error { + listOpts := &obs.ListObjectsInput{ + Bucket: bucket, + } + // list all objects + resp, err := obsClient.ListObjects(listOpts) + if err != nil { + return getObsError("Error listing objects of OBS bucket", bucket, err) + } + + objects := make([]obs.ObjectToDelete, len(resp.Contents)) + for i, content := range resp.Contents { + objects[i].Key = content.Key + } + + deleteOpts := &obs.DeleteObjectsInput{ + Bucket: bucket, + Objects: objects, + } + log.Printf("[DEBUG] objects of %s will be deleted: %v", bucket, objects) + output, err := obsClient.DeleteObjects(deleteOpts) + if err != nil { + return getObsError("Error deleting all objects of OBS bucket", bucket, err) + } + if len(output.Errors) > 0 { + return fmt.Errorf("error some objects are still exist in %s: %#v", bucket, output.Errors) + } + return nil +} + +func expirationHash(v interface{}) int { + var buf bytes.Buffer + m := v.(map[string]interface{}) + + if v, ok := m["days"]; ok { + buf.WriteString(fmt.Sprintf("%d-", v.(int))) + } + if v, ok := m["storage_class"]; ok { + buf.WriteString(fmt.Sprintf("%s-", v.(string))) + } + return hashcode.String(buf.String()) +} + +func getObsError(action string, bucket string, err error) error { + if _, ok := err.(obs.ObsError); ok { + return fmt.Errorf("%s %s: %s", action, bucket, err) + } + return err +} + +// normalize format of storage class +func normalizeStorageClass(class string) string { + var ret string = class + + if class == "STANDARD_IA" { + ret = "WARM" + } else if class == "GLACIER" { + ret = "COLD" + } + return ret +} + +func normalizeWebsiteRoutingRules(w []obs.RoutingRule) (string, error) { + // transform []obs.RoutingRule to []WebsiteRoutingRule + websiteRules := make([]WebsiteRoutingRule, 0, len(w)) + for _, rawRule := range w { + rule := WebsiteRoutingRule{ + Condition: Condition{ + KeyPrefixEquals: rawRule.Condition.KeyPrefixEquals, + HttpErrorCodeReturnedEquals: rawRule.Condition.HttpErrorCodeReturnedEquals, + }, + Redirect: Redirect{ + Protocol: string(rawRule.Redirect.Protocol), + HostName: rawRule.Redirect.HostName, + HttpRedirectCode: rawRule.Redirect.HttpRedirectCode, + ReplaceKeyWith: rawRule.Redirect.ReplaceKeyWith, + ReplaceKeyPrefixWith: rawRule.Redirect.ReplaceKeyPrefixWith, + }, + } + websiteRules = append(websiteRules, rule) + } + + // normalize + withNulls, err := json.Marshal(websiteRules) + if err != nil { + return "", err + } + + var rules []map[string]interface{} + if err := json.Unmarshal(withNulls, &rules); err != nil { + return "", err + } + + var cleanRules []map[string]interface{} + for _, rule := range rules { + cleanRules = append(cleanRules, 
utils.RemoveNil(rule)) + } + + withoutNulls, err := json.Marshal(cleanRules) + if err != nil { + return "", err + } + + return string(withoutNulls), nil +} + +func bucketDomainNameWithCloud(bucket, region, cloud string) string { + return fmt.Sprintf("%s.obs.%s.%s", bucket, region, cloud) +} + +type Condition struct { + KeyPrefixEquals string `json:"KeyPrefixEquals,omitempty"` + HttpErrorCodeReturnedEquals string `json:"HttpErrorCodeReturnedEquals,omitempty"` +} + +type Redirect struct { + Protocol string `json:"Protocol,omitempty"` + HostName string `json:"HostName,omitempty"` + ReplaceKeyPrefixWith string `json:"ReplaceKeyPrefixWith,omitempty"` + ReplaceKeyWith string `json:"ReplaceKeyWith,omitempty"` + HttpRedirectCode string `json:"HttpRedirectCode,omitempty"` +} + +type WebsiteRoutingRule struct { + Condition Condition `json:"Condition,omitempty"` + Redirect Redirect `json:"Redirect"` +} + +func resourceObsBucketImport(_ context.Context, d *schema.ResourceData, _ interface{}) ([]*schema.ResourceData, error) { + var policyFormat = "obs" + parts := strings.SplitN(d.Id(), "/", 2) + if len(parts) == 2 { + policyFormat = parts[1] + } + + d.SetId(parts[0]) + mErr := multierror.Append(nil, d.Set("policy_format", policyFormat)) + if mErr.ErrorOrNil() != nil { + return nil, fmt.Errorf("error saving policy_format %s: %s", policyFormat, mErr) + } + + return []*schema.ResourceData{d}, nil +} From 92cef08170f46934e20a3a1979b8ca39eb638c7f Mon Sep 17 00:00:00 2001 From: zhaopanju Date: Thu, 29 Aug 2024 10:25:09 +0800 Subject: [PATCH 7/8] fix(ecs): create ecs instance with sys volume encrypt --- docs/resources/ecs_compute_instance.md | 56 ++++++++++++++++++- examples/ecs/encrypt-volume/main.tf | 33 ++++++++++- .../openstack/ecs/v1/cloudservers/requests.go | 4 +- .../ecs/resource_hcs_compute_instance.go | 32 ++++++++++- 4 files changed, 120 insertions(+), 5 deletions(-) diff --git a/docs/resources/ecs_compute_instance.md b/docs/resources/ecs_compute_instance.md index 056b5237..463ad2d1 100644 --- a/docs/resources/ecs_compute_instance.md +++ b/docs/resources/ecs_compute_instance.md @@ -393,6 +393,60 @@ resource "hcs_ecs_compute_instance" "ecs-userdata" { } ``` +### Instance with Encrypted System Volume +```hcl +data "hcs_availability_zones" "test" { +} + +data "hcs_ecs_compute_flavors" "flavors" { + availability_zone = data.hcs_availability_zones.test.names[0] + cpu_core_count = 2 + memory_size = 4 +} + +data "hcs_vpc_subnets" "test" { + name = "subnet-32a8" +} + +data "hcs_ims_images" "test" { + name = "mini_image" +} + +data "hcs_networking_secgroups" "test" { + name = "default" +} + +resource "hcs_ecs_compute_instance" "ecs-sys-volume-encrypt" { + name = "ecs-sys-volume-encrypt" + description = "terraform test" + image_id = data.hcs_ims_images.test.images[0].id + flavor_id = data.hcs_ecs_compute_flavors.flavors.ids[0] + ext_boot_type = data.hcs_ecs_compute_flavors.flavors.flavors[0].ext_boot_type + security_group_ids = [data.hcs_networking_secgroups.test.security_groups[0].id] + availability_zone = data.hcs_availability_zones.test.names[0] + user_data = "xxxxxxxxxxxxxxxxxxxxxxx" + + network { + uuid = data.hcs_vpc_subnets.test.subnets[0].id + source_dest_check = false + } + + system_disk_type = "business_type_01" + system_disk_size = 10 + kms_key_id = "ce488d6a-6090-4f7f-a95b-4faf3ce0bad0" + encrypt_cipher = "AES256-XTS" + + data_disks { + kms_key_id = "ce488d6a-6090-4f7f-a95b-4faf3ce0bad0" + encrypt_cipher = "AES256-XTS" + type = "business_type_01" + size = "10" + } + delete_disks_on_termination = true
delete_eip_on_termination = true +} +``` + ## Argument Reference The following arguments are supported: @@ -505,7 +559,7 @@ The `data_disks` block supports: * `kms_key_id` - (Optional, String, ForceNew) Specifies the ID of a KMS key. This is used to encrypt the disk. -* `encrypt_cipher` - (Optional, String, ForceNew) Specifies the encrypt cipher of KMS. This value must be set to AES256-XTS or SM4-XTS when SM series cryptographic algorithms are used. When other cryptographic algorithms are used, this value must be AES256-XTS. +* `encrypt_cipher` - (Optional, String, ForceNew) Specifies the encryption cipher of KMS. This value must be set to *AES256-XTS* or *SM4-XTS* when SM series cryptographic algorithms are used. When other cryptographic algorithms are used, this value must be *AES256-XTS*. This parameter must be specified if *kms_key_id* is specified. The `bandwidth` block supports: diff --git a/examples/ecs/encrypt-volume/main.tf b/examples/ecs/encrypt-volume/main.tf index 87721121..03c48c55 100644 --- a/examples/ecs/encrypt-volume/main.tf +++ b/examples/ecs/encrypt-volume/main.tf @@ -19,8 +19,9 @@ data "hcs_networking_secgroups" "test" { name = var.secgroup_name } +# create an ECS instance with an encrypted data volume resource "hcs_ecs_compute_instance" "ecs-test" { - name = join("-", [var.ecs_name, "-encrypt-volume"]) + name = join("-", [var.ecs_name, "data-volume-encrypt"]) description = var.ecs_description image_id = data.hcs_ims_images.test.images[0].id flavor_id = data.hcs_ecs_compute_flavors.test.ids[0] @@ -36,6 +37,36 @@ resource "hcs_ecs_compute_instance" "ecs-test" { system_disk_type = var.disk_type system_disk_size = var.system_disk_size + data_disks { + kms_key_id = var.kms_key_id + encrypt_cipher = var.encrypt_cipher + type = var.disk_type + size = var.data_disk_size + } + delete_disks_on_termination = true + delete_eip_on_termination = true +} + +# create an ECS instance with an encrypted system volume; the flavor must boot from a cloud disk (ext_boot_type is "Volume") +resource "hcs_ecs_compute_instance" "ecs-test-sys" { + name = join("-", [var.ecs_name, "sys-volume-encrypt"]) + description = var.ecs_description + image_id = data.hcs_ims_images.test.images[0].id + flavor_id = data.hcs_ecs_compute_flavors.test.ids[0] + ext_boot_type = data.hcs_ecs_compute_flavors.test.flavors[0].ext_boot_type + security_group_ids = [data.hcs_networking_secgroups.test.security_groups[0].id] + availability_zone = data.hcs_availability_zones.test.names[0] + enterprise_project_id = var.enterprise_project_id + + network { + uuid = data.hcs_vpc_subnets.test.subnets[0].id + source_dest_check = false + } + system_disk_type = var.disk_type + system_disk_size = var.system_disk_size + kms_key_id = var.kms_key_id + encrypt_cipher = var.encrypt_cipher + + data_disks { + kms_key_id = var.kms_key_id + encrypt_cipher = var.encrypt_cipher diff --git a/huaweicloudstack/sdk/huaweicloud/openstack/ecs/v1/cloudservers/requests.go b/huaweicloudstack/sdk/huaweicloud/openstack/ecs/v1/cloudservers/requests.go index 24fb50ed..f8a27905 100644 --- a/huaweicloudstack/sdk/huaweicloud/openstack/ecs/v1/cloudservers/requests.go +++ b/huaweicloudstack/sdk/huaweicloud/openstack/ecs/v1/cloudservers/requests.go @@ -33,7 +33,7 @@ type CreateOpts struct { IsAutoRename *bool `json:"isAutoRename,omitempty"` - RootVolume RootVolume `json:"root_volume" required:"true"` + RootVolume RootVolume `json:"root_volume,omitempty"` DataVolumes []DataVolume `json:"data_volumes,omitempty"` @@ -133,6 +133,8 @@ type RootVolume struct { ExtendParam *VolumeExtendParam `json:"extendparam,omitempty"` Metadata
*VolumeMetadata `json:"metadata,omitempty"` + + EncryptionInfo *VolumeEncryptInfo `json:"encryption_info,omitempty"` } type DataVolume struct { diff --git a/huaweicloudstack/services/ecs/resource_hcs_compute_instance.go b/huaweicloudstack/services/ecs/resource_hcs_compute_instance.go index 1be0dfec..95afb1c0 100644 --- a/huaweicloudstack/services/ecs/resource_hcs_compute_instance.go +++ b/huaweicloudstack/services/ecs/resource_hcs_compute_instance.go @@ -200,6 +200,19 @@ func ResourceComputeInstance() *schema.Resource { Optional: true, Computed: true, }, + "kms_key_id": { + Type: schema.TypeString, + Optional: true, + Computed: true, + }, + "encrypt_cipher": { + Type: schema.TypeString, + Optional: true, + Computed: true, + ValidateFunc: validation.StringInSlice([]string{ + "AES256-XTS", "SM4-XTS", + }, false), + }, "data_disks": { Type: schema.TypeList, Optional: true, @@ -225,12 +238,15 @@ func ResourceComputeInstance() *schema.Resource { "kms_key_id": { Type: schema.TypeString, Optional: true, - ForceNew: true, + Computed: true, }, "encrypt_cipher": { Type: schema.TypeString, Optional: true, - ForceNew: true, + Computed: true, + ValidateFunc: validation.StringInSlice([]string{ + "AES256-XTS", "SM4-XTS", + }, false), }, }, }, @@ -1339,6 +1355,11 @@ func shouldUnsubscribeEIP(d *schema.ResourceData) bool { } func resourceInstanceRootVolume(d *schema.ResourceData) cloudservers.RootVolume { + extBootType := d.Get("ext_boot_type").(string) + if extBootType != "Volume" { + log.Printf("[INFO] extBootType is: %s, no need config root valume param.", extBootType) + return cloudservers.RootVolume{} + } diskType := d.Get("system_disk_type").(string) if diskType == "" { diskType = "business_type_01" @@ -1347,6 +1368,13 @@ func resourceInstanceRootVolume(d *schema.ResourceData) cloudservers.RootVolume VolumeType: diskType, Size: d.Get("system_disk_size").(int), } + if d.Get("kms_key_id") != "" { + encryptioninfo := cloudservers.VolumeEncryptInfo{ + CmkId: d.Get("kms_key_id").(string), + Cipher: d.Get("encrypt_cipher").(string), + } + volRequest.EncryptionInfo = &encryptioninfo + } return volRequest } From 6a39a5ac40e1eb512d584101e08d5bfe01c39a78 Mon Sep 17 00:00:00 2001 From: huawei Date: Wed, 28 Aug 2024 16:57:49 +0800 Subject: [PATCH 8/8] fix(vpc): hcs_vpc_route not support route_table_id --- docs/data-sources/vpc_peering_connection.md | 58 ------------ docs/resources/vpc_peering_connection.md | 68 -------------- .../vpc_peering_connection_accepter.md | 88 ------------------- docs/resources/vpc_route.md | 28 +----- 4 files changed, 2 insertions(+), 240 deletions(-) delete mode 100644 docs/data-sources/vpc_peering_connection.md delete mode 100644 docs/resources/vpc_peering_connection.md delete mode 100644 docs/resources/vpc_peering_connection_accepter.md diff --git a/docs/data-sources/vpc_peering_connection.md b/docs/data-sources/vpc_peering_connection.md deleted file mode 100644 index 3dcce7dd..00000000 --- a/docs/data-sources/vpc_peering_connection.md +++ /dev/null @@ -1,58 +0,0 @@ ---- -subcategory: "Virtual Private Cloud (VPC)" ---- - -# hcs_vpc_peering_connection - -The VPC Peering Connection data source provides details about a specific VPC peering connection. 
- -## Example Usage - -```hcl -data "hcs_vpc" "vpc" { - name = "vpc" -} - -data "hcs_vpc" "peer_vpc" { - name = "peer_vpc" -} - -data "hcs_vpc_peering_connection" "peering" { - vpc_id = data.hcs_vpc.vpc.id - peer_vpc_id = data.hcs_vpc.peer_vpc.id -} - -resource "hcs_vpc_route" "vpc_route" { - type = "peering" - nexthop = data.hcs_vpc_peering_connection.peering.id - destination = "192.168.0.0/16" - vpc_id = data.hcs_vpc.vpc.id -} -``` - -## Argument Reference - -The arguments of this data source act as filters for querying the available VPC peering connection. The given filters -must match exactly one VPC peering connection whose data will be exported as attributes. - -* `region` - (Optional, String) The region in which to obtain the VPC Peering Connection. If omitted, the provider-level - region will be used. - -* `id` - (Optional, String) The ID of the specific VPC Peering Connection to retrieve. - -* `name` - (Optional, String) The name of the specific VPC Peering Connection to retrieve. - -* `status` - (Optional, String) The status of the specific VPC Peering Connection to retrieve. - -* `vpc_id` - (Optional, String) The ID of the requester VPC of the specific VPC Peering Connection to retrieve. - -* `peer_vpc_id` - (Optional, String) The ID of the accepter/peer VPC of the specific VPC Peering Connection to retrieve. - -* `peer_tenant_id` - (Optional, String) The Tenant ID of the accepter/peer VPC of the specific VPC Peering Connection to - retrieve. - -## Attribute Reference - -In addition to all arguments above, the following attributes are exported: - -* `description` - The description of the VPC Peering Connection. diff --git a/docs/resources/vpc_peering_connection.md b/docs/resources/vpc_peering_connection.md deleted file mode 100644 index 1bd9bdf8..00000000 --- a/docs/resources/vpc_peering_connection.md +++ /dev/null @@ -1,68 +0,0 @@ ---- -subcategory: "Virtual Private Cloud (VPC)" ---- - -# hcs_vpc_peering_connection - -Provides a resource to manage a VPC Peering Connection resource. - --> **NOTE:** For cross-tenant (requester's tenant differs from the accepter's tenant) VPC Peering Connections, - use the `hcs_vpc_peering_connection` resource to manage the requester's side of the connection and - use the `hcs_vpc_peering_connection_accepter` resource to manage the accepter's side of the connection. -
If you create a VPC peering connection with another VPC of your own, the connection is created without the need - for you to accept the connection. - -## Example Usage - - ```hcl -resource "hcs_vpc_peering_connection" "peering" { - name = var.peer_conn_name - vpc_id = var.vpc_id - peer_vpc_id = var.accepter_vpc_id -} - ``` - -## Argument Reference - -The following arguments are supported: - -* `region` - (Optional, String, ForceNew) The region in which to create the VPC peering connection. If omitted, the - provider-level region will be used. Changing this creates a new VPC peering connection resource. - -* `name` - (Required, String) Specifies the name of the VPC peering connection. The value can contain 1 to 64 - characters. - -* `vpc_id` - (Required, String, ForceNew) Specifies the ID of a VPC involved in a VPC peering connection. Changing this - creates a new VPC peering connection. - -* `peer_vpc_id` - (Required, String, ForceNew) Specifies the VPC ID of the accepter tenant. Changing this creates a new - VPC peering connection. - -* `peer_tenant_id` - (Optional, String, ForceNew) Specifies the tenant ID of the accepter tenant. Changing this creates - a new VPC peering connection. - -* `description` - (Optional, String) Specifies the description of the VPC peering connection. - -## Attribute Reference - -In addition to all arguments above, the following attributes are exported: - -* `id` - The VPC peering connection ID. - -* `status` - The VPC peering connection status. The value can be PENDING_ACCEPTANCE, REJECTED, EXPIRED, DELETED, or - ACTIVE. - -## Timeouts - -This resource provides the following timeouts configuration options: - -* `create` - Default is 10 minutes. -* `delete` - Default is 10 minutes. - -## Import - -VPC Peering resources can be imported using the `vpc peering id`, e.g. - -``` -$ terraform import hcs_vpc_peering_connection.test_connection 22b76469-08e3-4937-8c1d-7aad34892be1 -``` diff --git a/docs/resources/vpc_peering_connection_accepter.md b/docs/resources/vpc_peering_connection_accepter.md deleted file mode 100644 index 65bb33db..00000000 --- a/docs/resources/vpc_peering_connection_accepter.md +++ /dev/null @@ -1,88 +0,0 @@ ---- -subcategory: "Virtual Private Cloud (VPC)" ---- - -# hcs_vpc_peering_connection_accepter - -Provides a resource to manage the accepter's side of a VPC Peering Connection. - --> **NOTE:** When a cross-tenant (requester's tenant differs from the accepter's tenant) VPC Peering Connection - is created, a VPC Peering Connection resource is automatically created in the accepter's account. - The requester can use the `hcs_vpc_peering_connection` resource to manage its side of the connection and - the accepter can use the `hcs_vpc_peering_connection_accepter` resource to accept its side of the connection - into management. - -## Example Usage - -```hcl - -resource "hcs_vpc" "vpc_main" { - name = var.vpc_name - cidr = var.vpc_cidr -} - -resource "hcs_vpc" "vpc_peer" { - name = var.peer_vpc_name - cidr = var.peer_vpc_cidr -} - -# Requester's side of the connection. -resource "hcs_vpc_peering_connection" "peering" { - name = var.peer_name - vpc_id = hcs_vpc.vpc_main.id - peer_vpc_id = hcs_vpc.vpc_peer.id - peer_tenant_id = var.tenant_id -} - -# Accepter's side of the connection. 
-resource "hcs_vpc_peering_connection_accepter" "peer" { - accept = true - - vpc_peering_connection_id = hcs_vpc_peering_connection.peering.id -} - ``` - -## Argument Reference - -The following arguments are supported: - -* `region` - (Optional, String, ForceNew) The region in which to create the vpc peering connection accepter. If omitted, - the provider-level region will be used. Changing this creates a new VPC peering connection accepter resource. - -* `vpc_peering_connection_id` - (Required, String, ForceNew) The VPC Peering Connection ID to manage. Changing this - creates a new VPC peering connection accepter. - -* `accept` - (Optional, Bool) Whether or not to accept the peering request. Defaults to `false`. - -## Removing hcs_vpc_peering_connection_accepter from your configuration - -HuaweiCloud allows a cross-tenant VPC Peering Connection to be deleted from either the requester's or accepter's side. -However, Terraform only allows the VPC Peering Connection to be deleted from the requester's side by removing the -corresponding `hcs_vpc_peering_connection` resource from your configuration. -Removing a `hcs_vpc_peering_connection_accepter` resource from your configuration will remove it from your -state file and management, but will not destroy the VPC Peering Connection. - -## Attribute Reference - -In addition to all arguments above, the following attributes are exported: - -* `id` - The VPC peering connection ID. - -* `name` - The VPC peering connection name. - -* `status` - The VPC peering connection status. - -* `description` - The description of the VPC peering connection. - -* `vpc_id` - The ID of requester VPC involved in a VPC peering connection. - -* `peer_vpc_id` - The VPC ID of the accepter tenant. - -* `peer_tenant_id` - The Tenant Id of the accepter tenant. - -## Timeouts - -This resource provides the following timeouts configuration options: - -* `create` - Default is 10 minutes. -* `delete` - Default is 10 minutes. diff --git a/docs/resources/vpc_route.md b/docs/resources/vpc_route.md index ff113e95..e01b610a 100644 --- a/docs/resources/vpc_route.md +++ b/docs/resources/vpc_route.md @@ -21,25 +21,6 @@ resource "hcs_vpc_route" "vpc_route" { nexthop = var.nexthop } ``` - -### Add route to a custom route table - -```hcl -variable "vpc_id" {} -variable "nexthop" {} - -data "hcs_vpc_route_table" "rtb" { - vpc_id = var.vpc_id - name = "demo" -} - -resource "hcs_vpc_route" "vpc_route" { - vpc_id = var.vpc_id - route_table_id = data.hcs_vpc_route_table.rtb.id - destination = "172.16.8.0/24" - type = "ecs" - nexthop = var.nexthop -} ``` ## Argument Reference @@ -72,17 +53,12 @@ The following arguments are supported: * `description` - (Optional, String) Specifies the supplementary information about the route. The value is a string of no more than 255 characters and cannot contain angle brackets (< or >). -* `route_table_id` - (Optional, String, ForceNew) Specifies the route table ID for which a route is to be added. - If the value is not set, the route will be added to the *default* route table. - ## Attributes Reference In addition to all arguments above, the following attributes are exported: * `id` - The route ID, the format is `/` -* `route_table_name` - The name of route table. - ## Timeouts This resource provides the following timeouts configuration options: @@ -92,8 +68,8 @@ This resource provides the following timeouts configuration options: ## Import -VPC routes can be imported using the route table ID and their `destination` separated by a slash, e.g. 
+VPC routes can be imported using their `destination`, e.g.
 
 ```
-$ terraform import hcs_vpc_route.test <route_table_id>/<destination>
+$ terraform import hcs_vpc_route.test <destination>
 ```
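
As a usage sketch of the simplified import described above (not taken from the provider docs — the resource name, CIDR, and variable names are illustrative assumptions), an existing peering route could be adopted as follows:

```hcl
# Existing route to be adopted into Terraform state; the VPC ID and peering connection ID
# come from infrastructure that already exists outside of Terraform.
variable "vpc_id" {}
variable "peering_id" {}

resource "hcs_vpc_route" "imported" {
  vpc_id      = var.vpc_id
  type        = "peering"
  nexthop     = var.peering_id
  destination = "192.168.0.0/16"
}
```

Running `terraform import hcs_vpc_route.imported 192.168.0.0/16` would then bring the route under management, using only the destination as the import ID as described above.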