
Allow access to relevant data buckets #2951

Merged (3 commits) on Aug 19, 2023

Conversation

@slesaad (Contributor) commented Aug 9, 2023

What's changed

An extra IAM policy was added to grant access to the relevant buckets for the Greenhouse Gas Center Hub.
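Once the policy is applied, access can be spot-checked from a user server on the hub. This is only a sketch: it assumes the server's pods pick up the IAM role the policy is attached to (so no extra credentials are needed), and uses one of the bucket names from the policy.

```shell
# Exercises s3:ListBucket -- should list the bucket contents rather than
# returning AccessDenied:
aws s3 ls s3://ghgc-data-store-staging/

# Exercises s3:GetBucketLocation -- should print the bucket's region:
aws s3api get-bucket-location --bucket ghgc-data-store-staging
```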

@slesaad slesaad requested a review from a team as a code owner August 9, 2023 14:50
"s3:ListBucketMultipartUploads",
"s3:AbortMultipartUpload",
"s3:ListBucketVersions",
"s3:CreateBucket",
@yuvipanda (Member)

Is CreateBucket necessary?

@slesaad (Contributor, Author)

Actually, we should be fine without it. Thanks for pointing that out, @yuvipanda!

@slesaad (Contributor, Author) commented Aug 14, 2023

@yuvipanda could you re-review this? Thanks

@jmunroe (Contributor) commented Aug 14, 2023

Ping @2i2c-org/engineering

@damianavila damianavila requested a review from a team August 14, 2023 22:27
@sgibson91 (Member)

I tried to run tf plan so that I could check, review, and merge this, but ran into issues.

I followed the guidelines here to get CLI access to the AWS project in order to make terraform changes: https://infrastructure.2i2c.org/topic/access-creds/cloud-auth/#cloud-access-aws-iam-terminal

The plan output shows that Terraform had difficulty reading certain resources:

Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
  + create

Terraform planned the following actions, but then encountered a problem:

  # aws_iam_policy.extra_user_policy["prod"] will be created
  + resource "aws_iam_policy" "extra_user_policy" {
      + arn         = (known after apply)
      + description = "Extra permissions granted to users on hub prod on nasa-ghg-hub"
      + id          = (known after apply)
      + name        = "nasa-ghg-hub-prod-extra-user-policy"
      + name_prefix = (known after apply)
      + path        = "/"
      + policy      = jsonencode(
            {
              + Statement = [
                  + {
                      + Action   = [
                          + "s3:PutObject",
                          + "s3:GetObject",
                          + "s3:ListBucketMultipartUploads",
                          + "s3:AbortMultipartUpload",
                          + "s3:ListBucketVersions",
                          + "s3:ListBucket",
                          + "s3:DeleteObject",
                          + "s3:GetBucketLocation",
                          + "s3:ListMultipartUploadParts",
                        ]
                      + Effect   = "Allow"
                      + Resource = [
                          + "arn:aws:s3:::ghgc-data-staging",
                          + "arn:aws:s3:::ghgc-data-staging/*",
                          + "arn:aws:s3:::ghgc-data-store-dev",
                          + "arn:aws:s3:::ghgc-data-store-dev/*",
                          + "arn:aws:s3:::ghgc-data-store",
                          + "arn:aws:s3:::ghgc-data-store/*",
                          + "arn:aws:s3:::ghgc-data-store-staging",
                          + "arn:aws:s3:::ghgc-data-store-staging/*",
                          + "arn:aws:s3:::veda-data-store-staging",
                          + "arn:aws:s3:::veda-data-store-staging/*",
                          + "arn:aws:s3:::lp-prod-protected",
                          + "arn:aws:s3:::lp-prod-protected/*",
                          + "arn:aws:s3:::gesdisc-cumulus-prod-protected",
                          + "arn:aws:s3:::gesdisc-cumulus-prod-protected/*",
                          + "arn:aws:s3:::nsidc-cumulus-prod-protected",
                          + "arn:aws:s3:::nsidc-cumulus-prod-protected/*",
                          + "arn:aws:s3:::ornl-cumulus-prod-protected",
                          + "arn:aws:s3:::ornl-cumulus-prod-protected/*",
                        ]
                    },
                  + {
                      + Action   = "s3:ListAllMyBuckets"
                      + Effect   = "Allow"
                      + Resource = "*"
                    },
                ]
              + Version   = "2012-10-17"
            }
        )
      + policy_id   = (known after apply)
      + tags_all    = (known after apply)
    }

  # aws_iam_policy.extra_user_policy["staging"] will be created
  + resource "aws_iam_policy" "extra_user_policy" {
      + arn         = (known after apply)
      + description = "Extra permissions granted to users on hub staging on nasa-ghg-hub"
      + id          = (known after apply)
      + name        = "nasa-ghg-hub-staging-extra-user-policy"
      + name_prefix = (known after apply)
      + path        = "/"
      + policy      = jsonencode(
            {
              + Statement = [
                  + {
                      + Action   = [
                          + "s3:PutObject",
                          + "s3:GetObject",
                          + "s3:ListBucketMultipartUploads",
                          + "s3:AbortMultipartUpload",
                          + "s3:ListBucketVersions",
                          + "s3:ListBucket",
                          + "s3:DeleteObject",
                          + "s3:GetBucketLocation",
                          + "s3:ListMultipartUploadParts",
                        ]
                      + Effect   = "Allow"
                      + Resource = [
                          + "arn:aws:s3:::ghgc-data-staging",
                          + "arn:aws:s3:::ghgc-data-staging/*",
                          + "arn:aws:s3:::ghgc-data-store-dev",
                          + "arn:aws:s3:::ghgc-data-store-dev/*",
                          + "arn:aws:s3:::ghgc-data-store",
                          + "arn:aws:s3:::ghgc-data-store/*",
                          + "arn:aws:s3:::ghgc-data-store-staging",
                          + "arn:aws:s3:::ghgc-data-store-staging/*",
                          + "arn:aws:s3:::veda-data-store-staging",
                          + "arn:aws:s3:::veda-data-store-staging/*",
                          + "arn:aws:s3:::lp-prod-protected",
                          + "arn:aws:s3:::lp-prod-protected/*",
                          + "arn:aws:s3:::gesdisc-cumulus-prod-protected",
                          + "arn:aws:s3:::gesdisc-cumulus-prod-protected/*",
                          + "arn:aws:s3:::nsidc-cumulus-prod-protected",
                          + "arn:aws:s3:::nsidc-cumulus-prod-protected/*",
                          + "arn:aws:s3:::ornl-cumulus-prod-protected",
                          + "arn:aws:s3:::ornl-cumulus-prod-protected/*",
                          + "arn:aws:s3:::podaac-ops-cumulus-public",
                          + "arn:aws:s3:::podaac-ops-cumulus-public/*",
                          + "arn:aws:s3:::podaac-ops-cumulus-protected",
                          + "arn:aws:s3:::podaac-ops-cumulus-protected/*",
                        ]
                    },
                  + {
                      + Action   = "s3:ListAllMyBuckets"
                      + Effect   = "Allow"
                      + Resource = "*"
                    },
                ]
              + Version   = "2012-10-17"
            }
        )
      + policy_id   = (known after apply)
      + tags_all    = (known after apply)
    }

Plan: 2 to add, 0 to change, 0 to destroy.

Changes to Outputs:
  ~ eksctl_iam_command = <<-EOT
        eksctl create iamidentitymapping \
           --cluster nasa-ghg-hub \
           --region us-west-2 \
      -    --arn arn:aws:iam::597746869805:user/hub-continuous-deployer \
      +    --arn arn:aws:iam::444055461661:user/hub-continuous-deployer \
           --username hub-continuous-deployer  \
           --group system:masters
    EOT
╷
│ Error: reading Amazon S3 (Simple Storage) Bucket (nasa-ghg-hub-scratch): Forbidden: Forbidden
│       status code: 403, request id: SS6X99744RSCE2CF, host id: 3MlFu9S2HMpIbkP1gZRtvff0oNFhZEEp1Xt/KUmJDh+Y/hSMnn4VWFg8z9cCSOQRKXv1AZHNieA=
│ 
│   with aws_s3_bucket.user_buckets["scratch"],
│   on buckets.tf line 1, in resource "aws_s3_bucket" "user_buckets":
│    1: resource "aws_s3_bucket" "user_buckets" {
│ 
╵
╷
│ Error: reading Amazon S3 (Simple Storage) Bucket (nasa-ghg-hub-scratch-staging): Forbidden: Forbidden
│       status code: 403, request id: SS6K0F15S04JZBTK, host id: 3JhyAazRRyMj23GrISXVZ3IKEV4UiGkGdUsZx8d5eSIhIiBxCq77ypuQO0pGtf+4keFLxD7ZDR4=
│ 
│   with aws_s3_bucket.user_buckets["scratch-staging"],
│   on buckets.tf line 1, in resource "aws_s3_bucket" "user_buckets":
│    1: resource "aws_s3_bucket" "user_buckets" {
│ 
╵
╷
│ Error: error getting S3 Bucket Lifecycle Configuration (nasa-ghg-hub-scratch): AccessDenied: Access Denied
│       status code: 403, request id: 1RV9997J5XM43KS1, host id: N6RQUuN74LHe6ibxaTMivZJLN7JNccZzDtPiKIE3YbILNsSESJKkctOCLUFjZvpElxEMENy+jb4=
│ 
│   with aws_s3_bucket_lifecycle_configuration.user_bucket_expiry["scratch"],
│   on buckets.tf line 7, in resource "aws_s3_bucket_lifecycle_configuration" "user_bucket_expiry":
│    7: resource "aws_s3_bucket_lifecycle_configuration" "user_bucket_expiry" {
│ 
╵
╷
│ Error: error getting S3 Bucket Lifecycle Configuration (nasa-ghg-hub-scratch-staging): AccessDenied: Access Denied
│       status code: 403, request id: 1RVD8ZKF9M97MJ2W, host id: MgWOCIJ+9SN/OAxIk9UGHeBCX4IC/Cq+wx8hd8oHyicSHGBTjNhUq+sxLEaUE6GrKLLano2S15c=
│ 
│   with aws_s3_bucket_lifecycle_configuration.user_bucket_expiry["scratch-staging"],
│   on buckets.tf line 7, in resource "aws_s3_bucket_lifecycle_configuration" "user_bucket_expiry":
│    7: resource "aws_s3_bucket_lifecycle_configuration" "user_bucket_expiry" {
│ 
╵
╷
│ Error: reading IAM Access Key (AKIAYWLD5MYWZERLF4UP): AccessDenied: User: arn:aws:iam::444055461661:user/sgibson is not authorized to perform: iam:ListAccessKeys on resource: user hub-continuous-deployer with an explicit deny in an identity-based policy
│       status code: 403, request id: ccf87511-c23a-4dbd-870e-f2c1115d9a68
│ 
│   with aws_iam_access_key.continuous_deployer,
│   on cd.tf line 6, in resource "aws_iam_access_key" "continuous_deployer":
│    6: resource "aws_iam_access_key" "continuous_deployer" {
│ 
╵
╷
│ Error: reading EFS file system (fs-09bd18b3ca23eefa2): AccessDeniedException: User: arn:aws:iam::444055461661:user/sgibson is not authorized to perform: elasticfilesystem:DescribeFileSystems on the specified resource
│       status code: 403, request id: b4f8c81f-2b1f-4650-bc0e-a4257f6f7441
│ 
│   with aws_efs_file_system.homedirs,
│   on efs.tf line 44, in resource "aws_efs_file_system" "homedirs":
│   44: resource "aws_efs_file_system" "homedirs" {
│ 
╵
╷
│ Error: reading EKS Cluster (nasa-ghg-hub): AccessDeniedException: User: arn:aws:iam::444055461661:user/sgibson is not authorized to perform: eks:DescribeCluster on resource: arn:aws:eks:us-west-2:444055461661:cluster/nasa-ghg-hub with an explicit deny
│ 
│   with data.aws_eks_cluster.cluster,
│   on main.tf line 34, in data "aws_eks_cluster" "cluster":
│   34: data "aws_eks_cluster" "cluster" {

I think the changes are fine, but I won't be able to apply them.

@damianavila (Contributor)

@sgibson91 (Member)

@damianavila Yes! I also wondered if it's because it's a NASA account? I remember we had to get the role for the continuous deployer exempted from some policy or other. Maybe I'm not allowed to use Access Keys without approval?

@sgibson91 (Member)

I also tried running deployer use-cluster-credentials nasa-ghg before executing terraform, hoping that the continuous deployer role would have access that I don't, but got the same errors.

@sgibson91 (Member)

Ok, we are making progress. There is some copy-pasta in the cluster file that made me assume this was in the same account as the NASA VEDA cluster (I will open a PR to update that). It turns out that no other 2i2c engineers had accounts in the smce-ghg-center account yet, but Yuvi made me one yesterday. However, I still cannot run the terraform code, and I now believe the cause is what Damián mentioned above (I pinged Yuvi again): https://infrastructure.2i2c.org/hub-deployment-guide/new-cluster/aws/#grant-eksctl-access-to-other-users

Current terraform errors:

│ Error: reading Amazon S3 (Simple Storage) Bucket (nasa-ghg-hub-scratch): Forbidden: Forbidden
│       status code: 403, request id: GC7Q3ZA30W5JPZPV, host id: IdMTlWnF2ZAX/WFL2WlrxhgclhptfyKSx4MgRvr3KzOCUXuL4HabrjphNIae2ily9iQgNvmD9a4=
│ 
│   with aws_s3_bucket.user_buckets["scratch"],
│   on buckets.tf line 1, in resource "aws_s3_bucket" "user_buckets":
│    1: resource "aws_s3_bucket" "user_buckets" {
│ 
╵
╷
│ Error: reading Amazon S3 (Simple Storage) Bucket (nasa-ghg-hub-scratch-staging): Forbidden: Forbidden
│       status code: 403, request id: GC7Z22VVWW0EZ0QA, host id: hJTy2DCqtYJzdb1QVRXYVQH3ACrg9ekE0kkZyBHrIthurqhLOcleXIt04c6cF492Rlje5h0gM18=
│ 
│   with aws_s3_bucket.user_buckets["scratch-staging"],
│   on buckets.tf line 1, in resource "aws_s3_bucket" "user_buckets":
│    1: resource "aws_s3_bucket" "user_buckets" {
│ 
╵
╷
│ Error: error getting S3 Bucket Lifecycle Configuration (nasa-ghg-hub-scratch): AccessDenied: Access Denied
│       status code: 403, request id: R4DFNHW7JWTH2A9A, host id: CemlMjlzfTdbK9XoD0tkj9Bzd+kmslkiVBqoo1WKyl/mdwHbLr/lzoBowraq6p61M5NLT+n0RUo=
│ 
│   with aws_s3_bucket_lifecycle_configuration.user_bucket_expiry["scratch"],
│   on buckets.tf line 7, in resource "aws_s3_bucket_lifecycle_configuration" "user_bucket_expiry":
│    7: resource "aws_s3_bucket_lifecycle_configuration" "user_bucket_expiry" {
│ 
╵
╷
│ Error: error getting S3 Bucket Lifecycle Configuration (nasa-ghg-hub-scratch-staging): AccessDenied: Access Denied
│       status code: 403, request id: R4DC74KCBT6GC9WF, host id: UEoLm7rgCop6QAfszRoqz4nzua0k+q0uen6venFilrMtLbbLvMVBptW+SbRgm4FUB/U2E96XTPw=
│ 
│   with aws_s3_bucket_lifecycle_configuration.user_bucket_expiry["scratch-staging"],
│   on buckets.tf line 7, in resource "aws_s3_bucket_lifecycle_configuration" "user_bucket_expiry":
│    7: resource "aws_s3_bucket_lifecycle_configuration" "user_bucket_expiry" {
│ 
╵
╷
│ Error: reading IAM Access Key (AKIAYWLD5MYWZERLF4UP): AccessDenied: User: arn:aws:iam::597746869805:user/sgibson91 is not authorized to perform: iam:ListAccessKeys on resource: user hub-continuous-deployer with an explicit deny in an identity-based policy
│       status code: 403, request id: 148d2ff9-b2ef-4bbf-bae6-26b468d51e81
│ 
│   with aws_iam_access_key.continuous_deployer,
│   on cd.tf line 6, in resource "aws_iam_access_key" "continuous_deployer":
│    6: resource "aws_iam_access_key" "continuous_deployer" {
│ 
╵
╷
│ Error: reading EFS file system (fs-09bd18b3ca23eefa2): AccessDeniedException: User: arn:aws:iam::597746869805:user/sgibson91 is not authorized to perform: elasticfilesystem:DescribeFileSystems on the specified resource
│       status code: 403, request id: e2d1ddce-7849-4319-bdb1-f39c9c2f9bba
│ 
│   with aws_efs_file_system.homedirs,
│   on efs.tf line 44, in resource "aws_efs_file_system" "homedirs":
│   44: resource "aws_efs_file_system" "homedirs" {
│ 
╵
╷
│ Error: reading EKS Cluster (nasa-ghg-hub): AccessDeniedException: User: arn:aws:iam::597746869805:user/sgibson91 is not authorized to perform: eks:DescribeCluster on resource: arn:aws:eks:us-west-2:597746869805:cluster/nasa-ghg-hub with an explicit deny
│ 
│   with data.aws_eks_cluster.cluster,
│   on main.tf line 34, in data "aws_eks_cluster" "cluster":
│   34: data "aws_eks_cluster" "cluster" {

@slesaad (Contributor, Author) commented Aug 17, 2023

Thanks for working on this, @sgibson91! 🙇 When do you think this will be done?

@sgibson91 (Member)

@slesaad I need Yuvi to grant me the correct permissions on the cluster to be able to execute the terraform apply. Unfortunately, he is currently out sick.

@slesaad (Contributor, Author) commented Aug 17, 2023

Ah okay, hope he feels better soon.

@sgibson91 (Member) commented Aug 18, 2023

Ok, so it turns out that because MFA is enforced on the account, I also have to MFA-authenticate my CLI session, following these docs: https://repost.aws/knowledge-center/authenticate-mfa-cli. I used the named-profile option with the Access Key ID, Secret Access Key, and Session Token returned by the sts get-session-token command.
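The flow described above can be sketched as follows. This is a hedged outline of the get-session-token / named-profile approach from the linked docs; the MFA device ARN, profile name, and token code are placeholders, not values from this PR.

```shell
# 1. Exchange long-lived keys plus the current MFA code for temporary
#    credentials (serial-number ARN and token code are placeholders):
aws sts get-session-token \
  --serial-number arn:aws:iam::111122223333:mfa/my-device \
  --token-code 123456

# 2. Store the returned AccessKeyId / SecretAccessKey / SessionToken in a
#    named profile; quoting the values protects any shell metacharacters
#    that may appear in the session token:
aws configure set aws_access_key_id     "$ACCESS_KEY_ID"     --profile mfa
aws configure set aws_secret_access_key "$SECRET_ACCESS_KEY" --profile mfa
aws configure set aws_session_token     "$SESSION_TOKEN"     --profile mfa

# 3. Point terraform at that profile:
AWS_PROFILE=mfa terraform plan -var-file=projects/nasa-ghg.tfvars
```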

So I did that, and I still get an error, but a different one:

$ tf plan -var-file=projects/nasa-ghg.tfvars -out=nasa-ghg

Planning failed. Terraform encountered an error while generating this plan.

╷
│ Error: configuring Terraform AWS Provider: validating provider credentials: retrieving caller identity from STS: operation error STS: GetCallerIdentity, https response error StatusCode: 403, RequestID: b81a187d-44cd-4b94-9555-03d2052f3575, api error InvalidClientTokenId: The security token included in the request is invalid.
│ 
│   with provider["registry.terraform.io/hashicorp/aws"],
│   on main.tf line 30, in provider "aws":
│   30: provider "aws" {
│ 
╵

@sgibson91 (Member) commented Aug 18, 2023

The session token I was given had backslashes in it that needed escaping. But now I am back to:

Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
  ~ update in-place

Terraform planned the following actions, but then encountered a problem:

  # aws_iam_user.continuous_deployer will be updated in-place
  ~ resource "aws_iam_user" "continuous_deployer" {
        id            = "hub-continuous-deployer"
        name          = "hub-continuous-deployer"
      ~ tags          = {
          - "creator" = "yuvipanda" -> null
        }
      ~ tags_all      = {
          - "creator" = "yuvipanda"
        } -> (known after apply)
        # (4 unchanged attributes hidden)
    }

Plan: 0 to add, 1 to change, 0 to destroy.
╷
│ Error: reading Amazon S3 (Simple Storage) Bucket (nasa-ghg-hub-scratch): Forbidden: Forbidden
│       status code: 403, request id: DB0CXZJFGC5MW0EJ, host id: ARsqBYTVF3xGThQF5qcGVkgVObJBD0e/rrVWvP0btW3occJS6Ql+cCtsSdXt26gKiqWOkusRn8E=
│ 
│   with aws_s3_bucket.user_buckets["scratch"],
│   on buckets.tf line 1, in resource "aws_s3_bucket" "user_buckets":
│    1: resource "aws_s3_bucket" "user_buckets" {
│ 
╵
╷
│ Error: reading Amazon S3 (Simple Storage) Bucket (nasa-ghg-hub-scratch-staging): Forbidden: Forbidden
│       status code: 403, request id: DB00FYZRE0X75QX6, host id: rM8VWGgzFfHVFbFdlc/FvIhV+L1Q1zxBxlTW0QCGgByh655XG9clIByh/0ibb9kMBRBpnwBKG2M=
│ 
│   with aws_s3_bucket.user_buckets["scratch-staging"],
│   on buckets.tf line 1, in resource "aws_s3_bucket" "user_buckets":
│    1: resource "aws_s3_bucket" "user_buckets" {
│ 
╵
╷
│ Error: error getting S3 Bucket Lifecycle Configuration (nasa-ghg-hub-scratch): AccessDenied: Access Denied
│       status code: 403, request id: 8FXJ2HKS0Q4FTQJV, host id: lUcdJd9k8DAVgiTcNIJJiGvfZyvD9l4IoFzZ7rAl7WfSJ7QW5uevRqiYw8jHpShHmZnkGumrlsU=
│ 
│   with aws_s3_bucket_lifecycle_configuration.user_bucket_expiry["scratch"],
│   on buckets.tf line 7, in resource "aws_s3_bucket_lifecycle_configuration" "user_bucket_expiry":
│    7: resource "aws_s3_bucket_lifecycle_configuration" "user_bucket_expiry" {
│ 
╵
╷
│ Error: error getting S3 Bucket Lifecycle Configuration (nasa-ghg-hub-scratch-staging): AccessDenied: Access Denied
│       status code: 403, request id: 8FXS95838DH97WB7, host id: 5QxqRCDH5JM42MVpj/tnzr+0JzEXdyvBcDqoQXy4LrDnhoc/8+/gkEcGIn7V0qV3TbPMYWZR2QA=
│ 
│   with aws_s3_bucket_lifecycle_configuration.user_bucket_expiry["scratch-staging"],
│   on buckets.tf line 7, in resource "aws_s3_bucket_lifecycle_configuration" "user_bucket_expiry":
│    7: resource "aws_s3_bucket_lifecycle_configuration" "user_bucket_expiry" {
│ 
╵
╷
│ Error: reading IAM Access Key (AKIAYWLD5MYWZERLF4UP): AccessDenied: User: arn:aws:iam::597746869805:user/sgibson91 is not authorized to perform: iam:ListAccessKeys on resource: user hub-continuous-deployer with an explicit deny in an identity-based policy
│       status code: 403, request id: 00820580-4024-4703-ba34-2eb96ffeeac4
│ 
│   with aws_iam_access_key.continuous_deployer,
│   on cd.tf line 6, in resource "aws_iam_access_key" "continuous_deployer":
│    6: resource "aws_iam_access_key" "continuous_deployer" {
│ 
╵
╷
│ Error: reading EFS file system (fs-09bd18b3ca23eefa2): AccessDeniedException: User: arn:aws:iam::597746869805:user/sgibson91 is not authorized to perform: elasticfilesystem:DescribeFileSystems on the specified resource
│       status code: 403, request id: c7e93696-02f1-43bb-ba65-0a39e97b6605
│ 
│   with aws_efs_file_system.homedirs,
│   on efs.tf line 44, in resource "aws_efs_file_system" "homedirs":
│   44: resource "aws_efs_file_system" "homedirs" {
│ 
╵
╷
│ Error: reading EKS Cluster (nasa-ghg-hub): AccessDeniedException: User: arn:aws:iam::597746869805:user/sgibson91 is not authorized to perform: eks:DescribeCluster on resource: arn:aws:eks:us-west-2:597746869805:cluster/nasa-ghg-hub with an explicit deny
│ 
│   with data.aws_eks_cluster.cluster,
│   on main.tf line 34, in data "aws_eks_cluster" "cluster":
│   34: data "aws_eks_cluster" "cluster" {

[edited to add: the actual policy update is missing from this plan because I switched to a new branch to update our AWS auth docs and didn't switch back when trying to do terraform again]

@yuvipanda (Member)

Apologies for all the issues caused here. I got waylaid hard by covid just as I was finishing this up and hadn't given anyone else access :( On top of that, I think our interaction with 2FA on AWS wasn't documented well enough.

I've updated @sgibson91's #2998 to include both better documentation about using AWS MFA from the CLI and a convenience command to set it up correctly and easily; aws is behind gcloud in user experience here.

With that, I was able to run the following commands and apply this:

$ deployer exec-aws-shell ghg arn:aws:iam::597746869805:mfa/phone <code>
$ cd terraform/aws
$ terraform workspace select nasa-ghg
$ terraform apply -var-file projects/nasa-ghg.tfvars

This applied cleanly, and I am going to merge this now to reflect that.

@slesaad, could you try this out and see if it works?

Again, my immune system apologises to everyone for the issues caused.

@yuvipanda yuvipanda merged commit 2f49660 into 2i2c-org:master Aug 19, 2023
1 check passed