Support an array of compression formats #29
Conversation
The archiver library does check file headers, at least for some formats: https://github.com/mholt/archiver/blob/e4ef56d48eb029648b0e895bb0b6a393ef0829c3/tar.go#L24
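The kind of header check linked above can be sketched with a few well-known magic-byte prefixes. This is a simplified illustration using only the Go standard library, not the archiver library's actual code; `sniffHeader` is a hypothetical helper:

```go
package main

import (
	"bytes"
	"fmt"
)

// sniffHeader guesses the compression container from the file's leading
// magic bytes, similar in spirit to the header checks archiver performs.
func sniffHeader(b []byte) string {
	switch {
	case bytes.HasPrefix(b, []byte{0x1f, 0x8b}): // gzip magic
		return "gzip"
	case bytes.HasPrefix(b, []byte("PK\x03\x04")): // zip local file header
		return "zip"
	case bytes.HasPrefix(b, []byte("BZh")): // bzip2 magic
		return "bzip2"
	case bytes.HasPrefix(b, []byte{0xfd, '7', 'z', 'X', 'Z', 0x00}): // xz magic
		return "xz"
	default:
		return "unknown"
	}
}

func main() {
	fmt.Println(sniffHeader([]byte{0x1f, 0x8b, 0x08})) // prints: gzip
}
```

Header sniffing is more robust than trusting the extension, but it only works once the file exists, which is why a data source that must know the format *before* writing the output still ends up looking at the path.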
Thanks for working on this, @joestump! I feel a little conflicted about this, to be honest.

AWS Lambda eventually added support for passing environment variables, which then gave us the missing feature we needed that is comparable to an […]. Unfortunately, in the meantime some users had begun using […].

We have precedent for Terraform provider features that allow Terraform to step slightly into "build phase" use-cases even though we don't consider that within Terraform's core scope, such as […]. Ordinarily I'd be happy to retain the […].

Given that this data source is already in a rather troublesome state, I have reservations about expanding its scope to any additional use-cases, particularly if that would involve making a breaking change to its interface. One way to address this would be to export the archive data in a base64-encoded attribute as discussed in #27, but that would be unsuitable for archives of any non-trivial size since it would cause the entire file to be loaded into memory. That would also require switching back to having an explicit […].

So with all that said, I find myself leaning towards restricting the current scope of this data source just to compatibility with this emergent AWS Lambda use-case, even though we don't recommend it. We may even eventually explicitly deprecate it, though that's not a decision to be made lightly today.

With no other changes, I assume this would require Heroku slugs to be built and uploaded to Heroku by a prior build step, and the id passed in to the Terraform configuration for provisioning. That would be the approach we'd generally recommend, but if you think an analog to […]:

```hcl
resource "heroku_slug" "example" {
  source_dir = "${path.module}/heroku_slug"
}

resource "heroku_app_release" "foobar-release" {
  app     = "${heroku_app.example.name}"
  slug_id = "${heroku_slug.example.id}"
}
```

Since this would be specialized to the Heroku provider, it can abstract away the details of exactly what file format is required, which feels to me like a smoother user experience anyway. (If we do eventually decide to deprecate […])
Thank you for your submission! We require that all contributors sign our Contributor License Agreement ("CLA") before we can accept the contribution. Read and sign the agreement. Learn more about why HashiCorp requires a CLA and what the CLA includes. Joe Stump seems not to be a GitHub user. Have you signed the CLA already but the status is still pending? Recheck it.
Just adding another use case for this. We want to do the following within a single […].
In case you are looking for tar.gz, take a look here and vote for it: #277
FYI this branch should be updated to use Archiver v4: |
What does this PR do?
This is a pretty invasive PR. I've got the basics working to support all sorts of compression formats using the `archiver` package by @mholt. Because the `archiver` package detects the compression format from the file path, the `type` attribute has been marked as `Deprecated`. The compression format is now detected from `output_path`.

Supported formats/extensions:

- `.zip`
- `.tar`
- `.tar.gz` & `.tgz`
- `.tar.bz2` & `.tbz2`
- `.tar.xz` & `.txz`
- `.tar.lz4` & `.tlz4`
- `.tar.sz` & `.tsz`
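Extension-based detection as described above could be sketched like this. A simplified illustration, not the PR's actual code; `detectFormat` is a hypothetical helper:

```go
package main

import (
	"fmt"
	"strings"
)

// detectFormat maps an output path to an archive format by its extension.
// Compound extensions are listed before their shorter cousins (".tar.gz"
// before ".tar") so the longest match wins.
func detectFormat(outputPath string) (string, error) {
	patterns := []struct{ ext, format string }{
		{".tar.gz", "tar.gz"}, {".tgz", "tar.gz"},
		{".tar.bz2", "tar.bz2"}, {".tbz2", "tar.bz2"},
		{".tar.xz", "tar.xz"}, {".txz", "tar.xz"},
		{".tar.lz4", "tar.lz4"}, {".tlz4", "tar.lz4"},
		{".tar.sz", "tar.sz"}, {".tsz", "tar.sz"},
		{".zip", "zip"}, {".tar", "tar"},
	}
	lower := strings.ToLower(outputPath)
	for _, p := range patterns {
		if strings.HasSuffix(lower, p.ext) {
			return p.format, nil
		}
	}
	return "", fmt.Errorf("unsupported archive extension in %q", outputPath)
}

func main() {
	f, _ := detectFormat("lambda.zip")
	fmt.Println(f) // prints: zip
}
```

Note the ordering constraint: matching `.tar` before `.tar.gz` would silently misclassify gzipped tarballs, which is presumably why `archiver` does its own suffix handling.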
Why is this change being proposed?
I'm a maintainer for the Heroku Terraform provider and I'd like to add the ability to deploy a Heroku "slug" from Terraform. A slug is just a `.tgz` file with an `app/` directory in it. I wanted to avoid a `null_resource` with a `local-exec` provisioner and felt adding more compression formats to `archive_file` was probably the most correct answer.

Once I dug into the internals, I saw that the way they were structured was pretty close to how the `archiver` package works, except the `archiver` package supports all sorts of goodies.

What's left?
I need to flesh out the `source_content`, `source_content_file`, and `source` blocks. As it stands, `archiver` only supports archiving a list of files/directories. I'm thinking of using `ioutil.TempDir` to write the contents out to a temporary directory before passing the file list off to `archiver`.

Maybe update to `dep`? I switched the Heroku provider to it and like it quite a bit more than `govend` and `govendor`.
TODO

- [ ] `source_content` and `source` blocks
- [ ] `vendor/`