data.archive_file does not generate archive file during apply #39
Comments
Based on your like on #3, I assume this is for the case where plan is executed and outputs a plan file which is then applied from a clean environment. We've also experienced this in a CI environment where plan and apply are separate stages, and I can also simulate the issue with this code and then running a separate plan and apply.
Same issue in my case.
still same issue.
Same issue here.
Seeing the same thing on 0.12.17: when I change a file in the directory referenced below, I'm running plan via
I'm adding an additional workaround below. If you don't know which files will change, I suggest something along the following:

```hcl
data "external" "hash" {
  program = ["bash", "${path.module}/scripts/shasum.sh", "${path.module}/configs", "${timestamp()}"]
}

data "archive_file" "main" {
  type        = "zip"
  output_path = pathexpand("archive-${data.external.hash.result.shasum}.zip")
  source_dir  = pathexpand("${path.module}/configs")
}

output "archive_file_path" {
  value = data.archive_file.main.output_path
}
```
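The `shasum.sh` script referenced above is not shown in the thread; here is a minimal sketch of what such a helper could look like. The function name and the `shasum` JSON key are assumptions based on the usage above, and the timestamp argument from the original invocation is ignored since it exists only to force a re-read.

```shell
#!/usr/bin/env sh
# Hypothetical helper for the data "external" "hash" block above.
# Hashes every file under a directory and prints the single-key JSON
# object that the external data source protocol expects on stdout.
dir_shasum() {
  dir="$1"
  # Hash each file's content, then hash the sorted list of per-file
  # hashes so the result is stable regardless of filesystem ordering.
  hash=$(find "$dir" -type f -exec sha256sum {} + | sort | sha256sum | cut -d' ' -f1)
  printf '{"shasum": "%s"}\n' "$hash"
}
```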
yea, a while after I posted my last comment, I came up with something like:

```hcl
locals {
  source_dir = "${path.module}/cookbook-archive"
}

resource "random_uuid" "this" {
  keepers = {
    for filename in fileset(local.source_dir, "**/*") :
    filename => filemd5("${local.source_dir}/${filename}")
  }
}

data "archive_file" "cookbook" {
  # threw the `/temp/` in there to gitignore it easier, but in hindsight it
  # could be just as easy to gitignore `cookbook*.zip`
  output_path = "${path.module}/temp/cookbook-${random_uuid.this.result}.zip"
  source_dir  = local.source_dir
  type        = "zip"
}

resource "aws_s3_bucket_object" "cookbook" {
  bucket = module.cookbook.bucket_name
  key    = "cookbook.zip"
  source = data.archive_file.cookbook.output_path

  tags = {
    ManagedBy = "Terraform"
  }
}
```

(did this from memory, so it might not quite work as-is, but it should be close)
This is much better, thanks! Maybe update your code so that it's valid (it needs a `,` on line 3).
good catch, thanks! also dried it up a bit :)
Oops, just ran into a weird thing with this code (seems like a provider error):
weird. I haven't run into that, but I've also only made one change, so maybe it'll bite me next time. Maybe try one of the other file hash methods? Could be something weird about md5 on one of the systems involved?
Ah, it's because I'm dynamically adding a file (generated by TF) to my source directory, using the
what if you added it explicitly somehow? something like:

```hcl
resource "random_uuid" "this" {
  keepers = merge(
    {
      localfile = md5(local_file.main.content)
    },
    {
      for filename in fileset(local.source_dir, "**/*") :
      filename => filemd5("${local.source_dir}/${filename}")
    }
  )
}
```

no idea if this works as-is... :)
The tricks work indeed, but then each time a new apply is made, the archive and all resources that depend on it (e.g. a Lambda function) will be modified, even if the content of the Lambda did not change.
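One way to reduce that churn (a sketch, assuming the archive feeds an AWS Lambda function; all names and paths here are illustrative, not from this thread) is to key the function on the archive's content hash, so dependents only update when the zip content actually changes:

```hcl
data "archive_file" "lambda" {
  type        = "zip"
  source_dir  = "${path.module}/src"
  output_path = "${path.module}/lambda.zip"
}

resource "aws_lambda_function" "main" {
  function_name = "example"
  role          = aws_iam_role.lambda.arn # assumed to be defined elsewhere
  handler       = "index.handler"
  runtime       = "nodejs18.x"
  filename      = data.archive_file.lambda.output_path

  # Redeploy only when the archive content changes, even if the zip file
  # itself is regenerated on every run.
  source_code_hash = data.archive_file.lambda.output_base64sha256
}
```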
I ran into this on Terraform Cloud, also. It would be ideal if we could persist a single directory between the plan and apply phases.
If I create the initial zip file manually myself, then the archive_file behavior on subsequent apply runs works fine for me (using Terraform 0.12.28).
This comment helped me solve my issue. I'm using Terraform in a Gitlab CI pipeline with separate plan and apply stages, and my apply stage would fail because the archive file was not found. What's happening (and the comment above helped me understand this) is that the plan step is where the archive file is actually created. To make this work in my CI pipeline, I added config to cache the files created by the plan stage and make them available to the apply stage.

I'd recommend changing the archive provider to produce the zip file during apply instead of plan; this would match how I think about Terraform working. At a minimum, the docs for the archive provider should be updated to make it clear when Terraform creates the archive file.
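The caching approach described can be sketched as follows; this is an assumed minimal pipeline layout (stage names, file paths, and the `*.zip` glob are illustrative), not the commenter's actual config:

```yaml
stages:
  - plan
  - apply

plan:
  stage: plan
  script:
    - terraform init -input=false
    - terraform plan -out=plan.tfplan -input=false
  artifacts:
    paths:
      # Hand the plan file and any zips generated during `plan`
      # over to the apply stage.
      - plan.tfplan
      - "*.zip"

apply:
  stage: apply
  script:
    - terraform init -input=false
    - terraform apply -input=false plan.tfplan
  dependencies:
    - plan
```

Note that a later comment warns against storing plan files as artifacts, since they can contain sensitive data in cleartext.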
how the hell did they manage to mess up a goddamn zip command |
This solution worked for me: adding a source code hash, per hashicorp/terraform#8344 (comment).
Having the exact same issue in our Gitlab CI pipeline.

EDIT: According to this bit of documentation, you can defer the creation of the archive file until some resource is applied (i.e. in the terraform apply step). One can imagine something like this, which also works as a workaround:

```hcl
data "archive_file" "zip" {
  type        = "zip"
  source_file = "${path.module}/textfile.txt"
  output_path = "${path.module}/myfile.zip"

  depends_on = [
    random_string.r
  ]
}

resource "random_string" "r" {
  length  = 16
  special = false
}
```

or something like this, which has an equivalent dependency graph:

```hcl
data "archive_file" "zip" {
  type        = "zip"
  source_file = "${path.module}/textfile.txt"
  output_path = "${path.module}/myfile-${random_string.r.result}.zip"
}

resource "random_string" "r" {
  length  = 16
  special = false
}
```
I just ran into this issue in Gitlab as well |
I managed to tweak @amine250 's solution to get it working. The downside of this approach is that even when the underlying files haven't changed, it will trigger an update. In my case this works out nicely, as I'm using this to deploy a Cloud Function (GCP), which will not redeploy when there are no changes (the zipfile I upload to Cloud Storage has a hash in its name). Note that using the null_resource directly on the archive resource and triggering the null resource with a hash of the two file contents does not work.
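The actual code was lost in this copy of the thread; here is a sketch of the null_resource variant being described (resource names and file paths are illustrative, not the commenter's):

```hcl
resource "null_resource" "archive_trigger" {
  # Re-created whenever either source file's content changes.
  triggers = {
    main_hash = filemd5("${path.module}/src/main.py")
    util_hash = filemd5("${path.module}/src/util.py")
  }
}

data "archive_file" "function" {
  type       = "zip"
  source_dir = "${path.module}/src"
  # Embedding the trigger id in the file name both defers archive creation
  # to apply time and yields a new zip name when the sources change.
  output_path = "${path.module}/function-${null_resource.archive_trigger.id}.zip"
}
```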
I also ran into the same issue in Gitlab; resource.random_string did not work, but resource.null_resource worked. Thanks!
Is there a follow-up on this? I was here one year ago, and this behaviour still occurs.
Wanted to leave a warning for anyone considering the suggestion: I tested this out, and it does not work. It simply unbreaks the apply by putting an old version of the zip file there. Taking an old copy of the zip file and running an apply shows that we end up with the old version of the zip file present during apply.
Knowing this helped solve my pipeline problem, where I would also plan and then apply in separate Gitlab pipeline stages. The apply would attempt to upload the Lambda zip files, which were generated in the plan stage, and it would fail. Adding the zip folder to the artifacts of the plan stage fixed it, making the files available in the apply stage. As other people have commented, I don't know why the plan stage is being used to generate zip files: planning should just be about making the plan file, and applying should be about creating things and doing actions. It seems wrong to do it in the plan stage.
Got hit by that problem, and I also solved it using #39 (comment). Works well (but it would be nice if the Terraform docs contained more borderline examples like this...)
also did the trick for me
For instance, for Gitlab CI:

```yaml
image:
  name: hashicorp/terraform:1.1.9
  entrypoint:
    - '/usr/bin/env'
    - 'PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin'

variables:
  PLAN: "plan.tfplan"
  TF_IN_AUTOMATION: "true"

.terraform_before_script:
  - terraform --version
  # Ensure directory for lambda function zip files exists
  - install -d lambda_output
  - terraform init -input=false

stages:
  - plan
  - deploy

plan:
  stage: plan
  before_script: !reference [.terraform_before_script]
  script:
    - terraform plan -out=$PLAN -input=false
  artifacts:
    name: plan
    paths:
      - $PLAN
      - lambda_output

deploy:
  stage: deploy
  before_script: !reference [.terraform_before_script]
  script:
    - terraform apply -input=false $PLAN
  dependencies:
    - plan
```

Then, in your Terraform file:

```hcl
data "archive_file" "function" {
  type        = "zip"
  source_dir  = "${path.root}/lambda/function"
  output_path = "${path.root}/lambda_output/function.zip"
}
```
FYI, it's not recommended to store plan files as artifacts, because they might contain sensitive data and are not encrypted.
Got hit by the same problem. I wonder why there is still no proper solution from the archive provider :(
Same issue on Terraform Cloud. Workaround with a consistent output_path:

```hcl
data "archive_file" "scenario_zip" {
  type        = "zip"
  output_path = "/tmp/${filesha1(var.inputfile)}.zip"

  source {
    content  = file(var.inputfile)
    filename = "myinputfile"
  }

  source {
    # Forces a datasource refresh
    content  = timestamp()
    filename = ".timestamp"
  }
}
```
The fundamental issue here is that the archive data source has side effects (i.e., it creates a .zip).

When terraform plan -out=tfplan is executed, the data source is read and the archive is generated during the plan. This is expected behaviour for Terraform; again, the issue is the fact that the archive data source has side effects. Currently, the workarounds described which have implicit or explicit dependencies on a managed resource are the only way to try and force execution during terraform apply rather than terraform plan.
An even better solution is to use timestamp():

```hcl
data "archive_file" "zip" {
  type        = "zip"
  source_file = "${path.module}/textfile.txt"
  output_path = "${path.module}/myfile-${timestamp()}.zip"
}
```

This will force Terraform to create the zip during the apply phase, and it doesn't need any extra providers.
I have the same problem, but this shouldn't be marked resolved with a timestamp() workaround: forcing zips and lambda layers to get versioned up every time is wasteful and slows down CI. The hash of the zip, or of its intended contents, should determine whether dependents are re-triggered, and currently it doesn't.
This does the trick for me, in combination with locals to reuse the path of the archive down the line. It creates a new archive only if the underlying source file has changed. Notice the `filemd5()` in the archive path:

```hcl
locals {
  lambda_api_function_name = "api"
  lambda_api_binary_path   = "${path.cwd}/../build/${local.lambda_api_function_name}"
  lambda_api_archive_path  = "${path.module}/tf_generated/${local.lambda_api_function_name}-${filemd5(local.lambda_api_binary_path)}.zip"
}

data "archive_file" "lambda_api_zip" {
  type        = "zip"
  source_file = local.lambda_api_binary_path
  output_path = local.lambda_api_archive_path
}
```
currently on Terraform v1.6.6 and it has happened twice this week. It's driving me insane. |
facing the same issue |
@bendbennett forgive me, but I find your response unsatisfactory.
If there's a more idiomatic way to do this, please tell us. What is HashiCorp's recommended approach to creating a zip from a file in source code?
Same issue here |
This works fine for me. Just followed this SO thread:
Issue still present. My plan and apply stages run separately in Gitlab CICD pipelines, so for me the fix was caching `*.zip` in my pipeline config so the files were passed from one stage to another.
Also facing this issue. Why is this closed? |
One point of confusion that I have is the difference between the
I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues. |
Hi there,

looks like data.archive_file does not generate an archive file during apply.

Terraform Version
Terraform version: 0.11.11

Affected Resource(s)

Terraform Configuration Files

Expected Behavior
Archive file is generated during terraform apply.

Actual Behavior
Archive file is not generated. However, if I run terraform plan before apply, the output is generated.

Steps to Reproduce
terraform apply