
Potential ways to integrate a task into gradle dependency resolution #1196

Open
fxshlein opened this issue Oct 18, 2024 · 3 comments

@fxshlein

Hi! I'm working on a very similar Gradle plugin (for internal use), and the way I resolve Minecraft is fairly similar to what you do (as far as I've seen):

  • Register a directory as a Maven repository (see the sketch right after this list)
  • Download the Minecraft jar
  • Apply Mappings, Unpick, etc.
  • "Publish" it in the Maven repository directory
  • (Somehow do all this before anyone cares whether the dependencies exist)
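
For reference, step 1 can look as simple as this - the repository name and the directory are just examples:

repositories {
    maven {
        name = "minecraftLocal"
        // any local directory works; a folder under the build directory is just one option
        url = uri(layout.buildDirectory.dir("minecraft-repo"))
    }
}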

If I'm not mistaken, I saw that you achieve this by running the steps after the project is evaluated. I found this an annoying way to do it, because you lose all the nice benefits of using tasks, like caching and up-to-date checks. Since I apparently have too much time, I looked into this for a long while, found some interesting things, and wanted to share them with you in case they are useful :)

This is not a 100% solution, of course, and I only implemented the IDE support for IntelliJ. I saw you also support other IDEs.

This is mainly based on Gradle supporting the following:

dependencies {
    implementation(files("hello.jar"))
}

This way, a file collection can be used as a dependency. Interestingly, a file collection can also carry task dependencies alongside it, and Gradle actually supports this for dependencies, mainly so that you can do something like:

val jarTask = tasks.register<Jar>("generateJar") {
    // configure the jar's contents here
}

dependencies {
    implementation(jarTask.map { it.outputs.files })
}

This will actually run the generateJar task whenever the dependencies are resolved by Gradle. Sadly, this is kind of useless for our use cases, since IDEs have horrible support for it: they won't be able to jump into the sources, for example. The IDEs don't resolve the dependencies themselves; they only go off the actual Maven coordinates you specify. But I found out that you can do this instead:

// this task generates the final maven artifact, your imagination is the limit:
val task = tasks.register("generateMavenArtifact") {
    // TODO: Generate a maven artifact in a directory that was previously configured as a repo for the project
}

// create an empty file collection, which is "built by" the task:
val fakeCollection = files().builtBy(task)

dependencies {
    implementation(fakeCollection) // <- make 'implementation' depend on the task through the file collection
    implementation("com.test:artifact:1.2.3") // <- specify the actual dependency generated by the task in any way you want
}

This way com.test:artifact:1.2.3 can be generated by the task, and Gradle will run the task before resolving the dependency (it does not even seem to matter in which order they are specified). But the Maven coordinates are also there, which makes the IDE happy.
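
For concreteness, the TODO above can be as little as writing the jar and a bare-bones POM into the repository directory. Here is a sketch with placeholder coordinates matching the example (the empty jar obviously stands in for the real artifact):

tasks.named("generateMavenArtifact") {
    val artifactDir = layout.buildDirectory.dir("minecraft-repo/com/test/artifact/1.2.3")
    outputs.dir(artifactDir)
    doLast {
        val dir = artifactDir.get().asFile.apply { mkdirs() }
        dir.resolve("artifact-1.2.3.jar").writeBytes(ByteArray(0)) // placeholder: write the real jar here
        dir.resolve("artifact-1.2.3.pom").writeText(
            """
            <project>
              <modelVersion>4.0.0</modelVersion>
              <groupId>com.test</groupId>
              <artifactId>artifact</artifactId>
              <version>1.2.3</version>
            </project>
            """.trimIndent()
        )
    }
}

(A -sources.jar dropped next to the main jar works for attaching sources, more on that below.)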

Interestingly, we can also do this now to dynamically handle the dependency requests:

configurations.implementation.configure {
    // This is called right before the configuration is resolved and the tasks are executed:
    withDependencies {
        for (dependency in this) {
            if (dependency.group == "com.test" && dependency.name == "artifact") {
                // Dynamically create a task for specifically generating the requested version:
                val versionTask = tasks.register("generateVersion${dependency.version}") {
                    doLast {
                        println("Hello, I'm ${dependency.version}")
                    }
                }

                // Make the main task depend on the version-specific task:
                task.configure {
                    dependsOn(versionTask)
                }
            }
        }
    }
}

Now, IntelliJ sadly does not give a damn about all this. The idea-ext plugin does actually provide a way to specify afterSync tasks, but annoyingly, these are executed in a separate run window, and I've noticed in the past that they can be somewhat unreliable. Instead, I looked at how Kotlin/JS solves this issue - because they have to run things like npm install. Turns out, if a task named prepareKotlinIdeaImport exists, IntelliJ will add it to the requested tasks when syncing... so you can just create one yourself, and it will be executed as part of the IDE sync. I basically just did this:

tasks.register("prepareKotlinIdeaImport") {
    dependsOn(configurations.compileClasspath.get())
}

That way, IntelliJ will resolve the compile dependencies, which will also run those tasks (including the dynamic one for the version).

I hope someone finds this useful - maybe the IDEA trick alone is less hacky than what you currently do, for example. I simply wasted too much time on this to keep it to myself; feel free to just close this, of course 😄

(Also, wanted to say thank you for all the amazing tooling the fabric team created. It's great to work with.)

@modmuss50
Member

Hi, this is actually something we have discussed before and possibly something we want to look at doing. There are quite a lot of unknowns though, and it's far from a trivial amount of work; it would likely require a Loom 2.0.

The trick of still using a Maven repo is neat, but I do still have some concerns/questions that you might be able to help answer.

  1. How do we depend on the Minecraft libraries/loader libraries? These come from online metadata or from within the loader jar, and we cannot add dependencies while the configuration is being resolved. I know Neo solved the MC libraries by publishing a Maven POM with all of them; this won't work for us, as we apply some processing to these dependencies to fix issues.

  2. Does attaching sources work? We need to be able to remap Minecraft and mod sources as well. This was always a blocker against using Artifact Transforms.

  3. You mention that right now we lose out on caching and up-to-date checks, but we gain the ability to fully control this ourselves, which prevents the tasks from running more often than they need to.

  4. On top of that, it allows us to share output files across subprojects. For example, subprojects that have the same inputs (such as mappings and AWs) will use the same Minecraft jar. I worry that there might be some challenges doing this with tasks.

  5. Similar to the last point, we can currently re-use "services" such as tiny remapper across what would ideally become multiple tasks. E.g. when remapping dependent mods we don't create a new tiny remapper instance for each remap. Basically, I worry that the performance would be worse. As we do everything during configuration, with the new configuration cache there is basically zero overhead, and even without it the time is very minimal.

  6. Using prepareKotlinIdeaImport is also good to know. We currently have a hack to insert a task during sync; it works quite well, but if it ever breaks, this does sound like a possible alternative.

I will definitely want to keep this issue around, but there are a lot of questions/unknowns to figure out before we even think about going down this route. It's also not like the current approach is fundamentally broken; I know it's not proper, but it works just fine in 99% of use cases.

@fxshlein
Author

Hi! I'll try to answer the questions as best as I can:

  1. I suppose this is where our use cases differ, since we go the route of just generating a POM file and take the dependencies "as is". I suppose there are a few potential solutions here though:
    1. Artifact Transforms could be fine, depending on the transformations you apply
    2. Also stick them into the Maven repo directory, perhaps with a classifier so they are resolved correctly; the POM file would then just reference that classifier
  2. For attaching sources, we simply put a -sources.jar alongside the regular one in the local Maven repo. IntelliJ expects it there, and in my experience it finds it reliably.
  3. True, although I've already been working with tasks for about half a year now. It's a bit of work, but if done correctly, the caching and up-to-date checks work quite well, to the point where I wouldn't know how to optimize it further. The "trick" is to separate all the steps into tasks so they can be cached separately. This does come at a cost though: overlapping task outputs are not allowed, so if you download, remap, unpick, etc. the client in separate tasks, you will have the jar around 4-5 times. Also, I'm not sure how well caching works across projects here; this is probably a big benefit you get from your system.
  4. My plan for this is to stick all the "important" tasks in the root project. Making subprojects depend on specific tasks from the root project shouldn't be an issue in Gradle, simply by doing rootProject.tasks.named("..."). I have to see how well this goes with things like configuration avoidance though. I think it should be fine, because the root project is configured anyway, so it would have to at least build the lazy task collection either way.
  5. For re-using services, I use Build Services, which sounds like exactly the right thing - you can inject them into tasks (see the sketch after this list). I'm not sure if they persist across subprojects though.
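
A minimal sketch of the Build Services part, in case it helps - all the names here are made up, and it's just the stock API in a build.gradle.kts:

abstract class RemapperService : BuildService<BuildServiceParameters.None> {
    // expensive, shareable state (e.g. a remapper instance) would live here
}

abstract class RemapTask : DefaultTask() {
    @get:Internal
    abstract val remapper: Property<RemapperService>

    @TaskAction
    fun run() {
        val service = remapper.get()
        // use the shared service instead of creating a new instance per task
    }
}

val remapperService = gradle.sharedServices.registerIfAbsent("remapper", RemapperService::class) {}

tasks.register<RemapTask>("remapMods") {
    remapper.set(remapperService)
    usesService(remapperService)
}

Judging by the docs, the registry belongs to the build rather than to a single project, so subprojects should see the same instance (included builds get their own registry though).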

I'll continue implementation and report back if I find anything else interesting 🙂

I'm still in the process of migrating everything to this new method. Previously, I just attached the tasks to the compilation of a plugin produced by an includeBuild; that way, even IntelliJ has to run the tasks before it can evaluate the main project. Not really scalable across multiple projects though... although I tried, and you can actually generate a temporary file and includeBuild it lol

@lukebemish
Contributor

This approach is not dissimilar to how NeoGradle handles setting up the neoforge or minecraft dependency; the major issue that can arise, and arises there, is one of configuration cache behaviour. Namely, I assume your task in question generates a pom or module file in the local repository. Now, the pom/module of dependencies in configurations is a conf cache input! This is because the conf cache includes dependency resolution state, which, like project state, is tossed out after configuration; when running with conf cache, configurations are "resolved" (which does not mean running the task deps of the configuration, just finding what is in it and where that stuff will be once the task deps have run) at the end of the configuration phase, and then wired up as normal task deps. This means that between the first and second runs, the conf cache input will have changed -- namely, the pom/module was missing in one and present in the next -- and thus the cache will be invalidated between the two. (This is the current behaviour of NeoGradle, in fact.)

That said, this is not fundamentally a bad approach -- and it solves the problem of IDE source attachment, which is otherwise a true pain to solve (there are a few other approaches to this and they all suck for various reasons; I suppose the conf-cache incompatibility is just the tradeoff of this approach). There are also some worries about what happens when those deps are exposed across projects, as they can cause issues in their destination, whereas a normal task-backed file collection dependency can be shared across projects without issue.

Some more feedback: sticking all the "important" tasks in the root project is not a direction you should be going, as the approach you noted -- rootProject.tasks.named("...") -- is incompatible with project isolation, which seems to be the next Big Thing after the conf cache. Furthermore, being able to control up-to-date checks and caching yourself is huge, and you miss out on that with this approach unless you manually re-implement it -- and, just to note, Gradle invalidates all task outputs from task actions that aren't Gradle types any time any plugin updates in the environment. It's a very fragile cache. All in all, a useful approach to share though!
