Merge pull request #136 from YikaiHu/main
Update to version v2.6.0
YikaiHu authored Jan 18, 2024
2 parents 2fdfc6b + 3fc6e81 commit 7d4cbe3
Showing 93 changed files with 2,829 additions and 156 deletions.
14 changes: 14 additions & 0 deletions CHANGELOG.md
@@ -5,6 +5,20 @@ All notable changes to this project will be documented in this file.
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).

## [2.6.0] - 2024-01-18

### Added

- Implemented server-side encryption options for writing objects into the Amazon S3 destination bucket: 'AES256' for AES256 encryption, 'AWS_KMS' for AWS Key Management Service encryption, and 'None' for no encryption (see the sketch at the end of this section). #124
- Provided the optional Amazon S3 bucket to hold the prefix list file. #125, #97

### Changed

- Expanded Finder memory options, now including increased capacities of 316GB & 512GB.
- Added the feature of deleting the KMS key automatically after the solution pipeline status changes to Stopped. #135
- Added the feature that the Finder instance enables the DTH-CLI automatically after an external reboot.
- Added documentation on how to deploy the S3/ECR transfer task using CloudFormation. #128
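
As a rough illustration of the new encryption options (#124), the sketch below shows how the three settings could map onto S3 `PutObject` parameters in boto3; the function and variable names are illustrative, not the solution's actual worker code:

```python
import boto3

s3 = boto3.client("s3")

def put_with_encryption(bucket, key, body, option, kms_key_id=None):
    """Write one object using the selected destination encryption option.

    option is one of 'AES256' (SSE-S3), 'AWS_KMS' (SSE-KMS), or 'None'.
    """
    extra = {}
    if option == "AES256":
        extra["ServerSideEncryption"] = "AES256"
    elif option == "AWS_KMS":
        extra["ServerSideEncryption"] = "aws:kms"
        if kms_key_id:
            extra["SSEKMSKeyId"] = kms_key_id  # omit to use the default S3 KMS key
    # option == 'None' sends no encryption headers at all
    s3.put_object(Bucket=bucket, Key=key, Body=body, **extra)
```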

## [2.5.0] - 2023-09-15

### Added
2 changes: 1 addition & 1 deletion CONTRIBUTING.md
@@ -11,7 +11,7 @@ information to effectively respond to your bug report or contribution.

We welcome you to use the GitHub issue tracker to report bugs or suggest features.

-When filing an issue, please check [existing open](https://github.com/awslabs/data-transfer-hub/issues), or [recently closed](https://github.com/awslabs/data-transfer-hub/issues?q=is%3Aissue+is%3Aclosed), issues to make sure somebody else hasn't already
+When filing an issue, please check [existing open](https://github.com/awslabs/data-transfer-hub/issues), or [recently closed](https://github.com/awslabs/data-transfer-hub/issues?utf8=%E2%9C%93&q=is%3Aissue%20is%3Aclosed%20), issues to make sure somebody else hasn't already
reported the issue. Please try to include as much information as you can. Details like these are incredibly useful:

* A reproducible test case or series of steps
22 changes: 16 additions & 6 deletions docs/USING_PREFIX_LIST.md
@@ -9,15 +9,25 @@ Please write the list of prefixes into a Plain Text format file, with one prefix per line.
For example:
![Prefix List File](images/prefix_list_file.png)
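
Because the screenshot above may not render outside the repository, here is a hypothetical example of what the file's contents could look like (each line is one S3 key prefix to include in the transfer):

```
project-a/logs/
project-b/data/2024/
images/
```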

-## Step 2: Upload the Prefix List File to the source data bucket
-
-You can put the prefix list file in anywhere in your source bucket.
-> Note: Please remember to write its actual path when filling in the location of the Prefix List File in the Step 3.
+## Step 2: Uploading the Prefix List File to Your Bucket
+> **Note**: Ensure you enter the precise path of the Prefix List File when specifying its location in Step 3.
+
+### Option 1: Uploading the Prefix List File to Your Source Bucket
+
+You can store the prefix list file anywhere within your source bucket.
![prefix_list_file_in_s3](images/prefix_list_file_in_s3.png)

+### Option 2: Uploading the Prefix List File to a Third Bucket within the Same Region and Account as the Data Transfer Hub
+
+You have the flexibility to place the prefix list file in any location within a third bucket. It is essential that this third bucket shares the same region and account as the Data Transfer Hub.
+![prefix_list_file_in_third_s3](images/prefix_list_third_s3.png)
+
+For those using the Data Transfer Hub portal, simply click the provided link to navigate directly to the third bucket.
+![prefix_list_file_from_portal](images/prefix_list_portal.png)

-## Step 3: Config the Cloudformation Stack template
-
-Write the path of the Prefix List File into the input box.
-
-![cloudformaiton](images/cloudformation_prefix_list.png)
+## Step 3: Configuring the CloudFormation Stack Template
+
+Enter the path of the Prefix List File in the provided input field.
+If your Prefix List File is located in the Source Bucket, leave the `Bucket Name for Source Prefix List File` parameter blank.
+
+![cloudformation](images/cloudformation_prefix_list.png)
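
If you script your deployments instead of using the console, the boto3 sketch below shows one way these two inputs might be supplied when creating the stack. The parameter keys and template URL are illustrative placeholders, not the solution's confirmed names; take the authoritative values from the CloudFormation template you actually deploy:

```python
import boto3

cfn = boto3.client("cloudformation")

cfn.create_stack(
    StackName="dth-s3-transfer-task",
    # Placeholder URL -- substitute the real Data Transfer Hub template location.
    TemplateURL="https://example.com/DataTransferS3Stack.template",
    Parameters=[
        # Hypothetical key: path of the Prefix List File inside the bucket.
        {"ParameterKey": "srcPrefixListFile", "ParameterValue": "config/prefix_list.txt"},
        # Hypothetical key: leave blank when the file lives in the source bucket.
        {"ParameterKey": "srcPrefixListBucket", "ParameterValue": ""},
    ],
    Capabilities=["CAPABILITY_NAMED_IAM"],
)
```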
32 changes: 21 additions & 11 deletions docs/USING_PREFIX_LIST_CN.md
@@ -1,23 +1,33 @@
[English](./USING_PREFIX_LIST_EN.md)

-# Transfer data from multiple specified prefixes using a prefix list
+# Filter the data transfer task with a prefix list file

-## Step 1: Create a prefix list
+## Step 1: Create a prefix list file

Please write the list of prefixes into a plain-text file, one prefix per line.

For example:
![Prefix List File](images/prefix_list_file.png)

-## Step 2: Upload the prefix list file to the source data bucket
-
-You can put the prefix list file anywhere in your source bucket.
-> Note: Please remember to fill in its actual path when specifying the location of the Prefix List File in Step 3.
+## Step 2: Upload the prefix list file to your bucket
+> **Note**: When specifying the location in Step 3, make sure to enter the exact path of the prefix list file.
+
+### Option 1: Upload the prefix list file to your source bucket
+
+You can store the prefix list file anywhere in your source bucket.
![prefix_list_file_in_s3](images/prefix_list_file_in_s3.png)

+### Option 2: Upload the prefix list file to a third bucket in the same Region and account as the Data Transfer Hub
+
+You can place the prefix list file anywhere in the third bucket. Importantly, this third bucket must be in the same Region and account as the Data Transfer Hub.
+![prefix_list_file_in_third_s3](images/prefix_list_third_s3.png)
+
+For users of the Data Transfer Hub console, simply click the provided link to go directly to the third bucket.
+![prefix_list_file_from_portal](images/prefix_list_portal.png)

-## Step 3: Configure the CloudFormation stack template
-
-Write the path of the prefix list file into the designated parameter of the stack template.
-
-![cloudformaiton](images/cloudformation_prefix_list.png)
+## Step 3: Configure the CloudFormation stack template
+
+Enter the path of the prefix list file in the provided input box.
+If your prefix list file is located in the source bucket, leave the `Bucket Name for Source Prefix List File` parameter blank.
+
+![cloudformation](images/cloudformation_prefix_list.png)
2 changes: 1 addition & 1 deletion docs/en-base/deployment/deployment.md
@@ -58,7 +58,7 @@ In AWS Regions where Amazon Cognito is not yet available, you can use OIDC to provide authentication.
7. Save the `App ID` (that is, `client_id`) and `Issuer` to a text file from Endpoint Information, which will be used later.
[![](../images/OIDC/endpoint-info.png)](../images/OIDC/endpoint-info.png)

-8. Update the `Login Callback URL` and `Logout Callback URL` to your IPC recorded domain name.
+8. Update the `Login Callback URL` and `Logout Callback URL` to your ICP recorded domain name.
[![](../images/OIDC/authentication-configuration.png)](../images/OIDC/authentication-configuration.png)

9. Set the Authorization Configuration.
2 changes: 1 addition & 1 deletion docs/en-base/faq.md
@@ -35,7 +35,7 @@ Not supported currently. For this scenario, we recommend using Amazon S3's Cross-Region Replication.

**7. Can I use AWS CLI to create a DTH S3 Transfer Task?**</br>

-Yes. Please refer to the tutorial [Using AWS CLI to launch DTH S3 Transfer task](../user-guide/tutorial-cli-launch).
+Yes. Please refer to the tutorial [Using AWS CLI to launch DTH S3 Transfer task](../user-guide/tutorial-s3#using-aws-cli).

## Performance

Binary file added docs/en-base/images/cluster_cn.png
Binary file added docs/en-base/images/cluster_en.png
Binary file added docs/en-base/images/launch-stack copy.png
1 change: 1 addition & 0 deletions docs/en-base/images/launch-stack.svg
Binary file added docs/en-base/images/secret_cn.png
Binary file added docs/en-base/images/secret_en.png
Binary file added docs/en-base/images/user.png
2 changes: 1 addition & 1 deletion docs/en-base/index.md
@@ -1,4 +1,4 @@
-The Data Transfer Hub solution provides secure, scalable, and trackable data transfer for Amazon Simple Storage Service (Amazon S3) objects and Amazon Elastic Container Registry (Amazon ECR) images. This data transfer helps customers expand their businesses globally by easily moving data in and out of Amazon Web Services (AWS) China Regions.
+The Data Transfer Hub solution provides secure, scalable, and trackable data transfer for Amazon Simple Storage Service (Amazon S3) objects and Amazon Elastic Container Registry (Amazon ECR) images. This data transfer helps customers easily create and manage different types (Amazon S3 object and Amazon ECR image) of transfer tasks between AWS [partitions](https://docs.aws.amazon.com/whitepapers/latest/aws-fault-isolation-boundaries/partitions.html) (for example, aws, aws-cn, aws-us-gov), and from other cloud providers to AWS.

This implementation guide provides an overview of the Data Transfer Hub solution, its reference architecture and components, considerations for planning the deployment, and configuration steps for deploying the Data Transfer Hub solution to the AWS Cloud.

3 changes: 3 additions & 0 deletions docs/en-base/plan-deployment/regions.md
@@ -13,12 +13,15 @@ This solution uses services which may not be currently available in all AWS Regions.
| Asia Pacific (Seoul) | ap-northeast-2|
| Asia Pacific (Singapore) | ap-southeast-1|
| Asia Pacific (Sydney) | ap-southeast-2|
| Asia Pacific (Melbourne) | ap-southeast-4|
| Canada (Central) | ca-central-1|
| Canada (Calgary) | ca-west-1|
| Europe (Ireland) | eu-west-1|
| Europe (London) | eu-west-2|
| Europe (Stockholm) | eu-north-1|
| Europe (Frankfurt) | eu-central-1|
| South America (São Paulo) | sa-east-1|
| Israel (Tel Aviv) | il-central-1|

## Supported regions for deployment in AWS China Regions

5 changes: 3 additions & 2 deletions docs/en-base/revisions.md
@@ -1,9 +1,10 @@
| Date | Description|
|----------------|--------|
| January 2021 | Initial release of version 1.0 |
| July 2021 | Released version 2.0 <br> 1. Support general OIDC providers, including Authing, Auth0, okta, etc.<br> 2. Support transferring objects from more Amazon S3 compatible storage services, such as Huawei Cloud OBS.<br> 3. Support setting the access control list (ACL) of the target bucket object<br> 4. Support deployment in account A, and copying data from account B to account C<br> 5. Change to use Graviton 2 instance, and turn on BBR to transfer S3 objects to improve performance and save costs<br> 6. Change to use Secrets Manager to maintain credential information |
| December 2021 | Released version 2.1 <br> 1. Support custom prefix list to filter transfer tasks<br> 2. Support configuration of single-run file transfer tasks<br> 3. Support configuration of tasks through custom CRON Expression timetable<br> 4. Support manual enabling or disabling of data comparison function |
| July 2022 | Released version 2.2 <br> 1. Support transferring data through Direct Connect|
| March 2023 | Released version 2.3 <br> 1. Support embedded dashboard and logs <br> 2. Support S3 Access Key Rotation <br> 3. Enhance One Time Transfer Task monitoring|
| April 2023 | Released version 2.4 <br> 1. Support payer request S3 object transfer|
| September 2023 | Released version 2.5 <br> 1. Added support for transferring ECR assets without tags <br> 2. Optimized the stop task operation and added a new filter condition to view all history tasks <br> 3. Enhanced transfer performance by utilizing cluster capabilities through parallel multipart upload for large file transfers <br> 4. Added automatic restart functionality for the Worker CLI <br> 5. Enabled IMDSv2 by default for Auto Scaling Groups |
| January 2024 | Released version 2.6 <br> 1. Added support for Amazon S3 destination buckets encrypted with Amazon S3 managed keys <br> 2. Provided the optional Amazon S3 bucket to hold the prefix list file <br> 3. Added the feature of deleting the KMS key automatically after the solution pipeline status changes to Stopped <br> 4. Added the feature that the Finder instance enables the DTH-CLI automatically after an external reboot <br> 5. Increased Finder capacity to 316GB & 512GB <br> 6. Added three supported Regions: Asia Pacific (Melbourne), Canada (Calgary), Israel (Tel Aviv) |
13 changes: 7 additions & 6 deletions docs/en-base/solution-overview/features-and-benefits.md
@@ -1,10 +1,11 @@
-The solution’s web console provides an interface for managing the following tasks:
+The solution supports the following key features:

-- Transferring Amazon S3 objects between AWS China Regions and AWS Regions
-- Transferring data from other cloud providers’ object storage services (including Alibaba Cloud OSS, Tencent COS, and Qiniu Kodo) to Amazon S3
-- Transferring objects from Amazon S3 compatible object storage service to Amazon S3
-- Transferring Amazon ECR images between AWS China Regions and AWS Regions
-- Transferring container images from public container registries (for example, Docker Hub, Google gcr.io, Red Hat Quay.io) to Amazon ECR
+- **Inter-Partition and Cross-Cloud Data Transfer**: to promote seamless transfer capabilities in one place
+- **Auto scaling**: to allow rapid response to changes in file transfer traffic
+- **High performance of large file transfer (1TB)**: to leverage the strengths of clustering, parallel large-file slicing, and automatic retries for robust file transfer
+- **Monitoring**: to track data flow, diagnose issues, and ensure the overall health of the data transfer processes
+- **Out-of-the-box deployment**

!!! note "Note"

79 changes: 79 additions & 0 deletions docs/en-base/tutorial/IAM-Policy.md
@@ -0,0 +1,79 @@

# Set up credentials for Amazon S3

## Step 1: Create an IAM policy

1. Open the AWS Management Console.

2. Choose IAM > Policy, and choose **Create Policy**.

3. Create a policy. You can follow the example below to use an IAM policy statement with minimum permissions, changing `<your-bucket-name>` in the policy statement accordingly.

!!! Note "Note"
    For S3 buckets in AWS China Regions, make sure you use `arn:aws-cn:s3:::` instead of `arn:aws:s3:::`.

### Policy for source bucket

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "dth",
            "Effect": "Allow",
            "Action": [
                "s3:GetObject",
                "s3:ListBucket"
            ],
            "Resource": [
                "arn:aws:s3:::<your-bucket-name>/*",
                "arn:aws:s3:::<your-bucket-name>"
            ]
        }
    ]
}
```


### Policy for destination bucket

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "dth",
            "Effect": "Allow",
            "Action": [
                "s3:PutObject",
                "s3:ListBucket",
                "s3:PutObjectAcl",
                "s3:AbortMultipartUpload",
                "s3:ListBucketMultipartUploads",
                "s3:ListMultipartUploadParts"
            ],
            "Resource": [
                "arn:aws:s3:::<your-bucket-name>/*",
                "arn:aws:s3:::<your-bucket-name>"
            ]
        }
    ]
}
```

To enable S3 Delete Event, you need to add the `"s3:DeleteObject"` permission to the policy.

Data Transfer Hub natively supports S3 source buckets with SSE-S3 or SSE-KMS enabled. If your source bucket uses *SSE-CMK* (a customer managed KMS key), replace the source bucket policy with the policy [for S3 SSE-KMS](./S3-SSE-KMS-Policy.md).

## Step 2: Create a user

1. Open the AWS Management Console.
1. Choose IAM > User, and choose **Add User**, then follow the wizard to create a user with credentials.
1. Specify a user name, for example, *dth-user*.
1. For Access Type, select **Programmatic access** only and choose **Next: Permissions**.
1. Select **Attach existing policies directly**, search for and select the policy created in Step 1, and choose **Next: Tags**.
1. Add tags if needed, and choose **Next: Review**.
1. Review the user details, and choose **Create User**.
1. Make sure you copied/saved the credentials, and then choose **Close** (a scripted alternative is sketched below the screenshot).

![Create User](../images/user.png)
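
For those who prefer to script these steps, here is a minimal boto3 sketch; the user name and policy ARN are placeholders for the user and the Step 1 policy in your own account:

```python
import boto3

iam = boto3.client("iam")

# Placeholders -- substitute your own user name and the ARN of the Step 1 policy.
user_name = "dth-user"
policy_arn = "arn:aws:iam::123456789012:policy/dth-source-bucket-policy"

iam.create_user(UserName=user_name)
iam.attach_user_policy(UserName=user_name, PolicyArn=policy_arn)

# Programmatic-access credentials; save the secret key now -- it cannot be
# retrieved again later.
access_key = iam.create_access_key(UserName=user_name)["AccessKey"]
print(access_key["AccessKeyId"], access_key["SecretAccessKey"])
```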
