Manually pin pypi versions #386
base: main
Conversation
33b906d to daf142c
 WORKDIR /build

-ARG MAX_JOBS=4
+ARG MAX_JOBS=-1
Do we need to cap the number of compilation workers? Is this to be a good citizen on a CI system somewhere? I think if you manually build with `--build-arg MAX_JOBS=N`, you invalidate the cache, so it's not really practical to have this as an externally-settable variable.
It depends on available memory. If you pick too high a MAX_JOBS, the Docker image with our dependencies will not build without cache; at least, that was happening in bionemo1 with TransformerEngine.
What does it mean that `--build-arg MAX_JOBS=N` invalidates the cache?
I suspect (at least this was happening locally) that if you pass in a MAX_JOBS argument via the command line, you'll only cache layers that are downstream of that particular configuration. So if I build locally with
docker buildx build . -t bionemo --build-arg MAX_JOBS=-1
that won't cache those built layers if I later build and omit the MAX_JOBS argument.
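The mechanism can be sketched in a minimal Dockerfile (the base image and package name here are hypothetical, just to show the shape):

```dockerfile
# Minimal sketch: an ARG consumed by a RUN step becomes part of that
# layer's cache key. Building once with --build-arg MAX_JOBS=-1 and
# later building without it changes the key, so this layer -- and every
# layer after it -- rebuilds from scratch.
FROM python:3.11-slim
ARG MAX_JOBS=4
# The cache key for this layer includes the resolved MAX_JOBS value.
RUN MAX_JOBS=${MAX_JOBS} pip install --no-cache-dir some-heavy-package
# Any layer below here also misses the cache once the layer above does.
COPY . /app
```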
@@ -14,7 +12,8 @@ RUN git clone https://github.com/NVIDIA/apex.git && \
     cd apex && \
     git checkout ${APEX_COMMIT} && \
     pip install . -v --no-build-isolation --disable-pip-version-check --no-cache-dir \
-        --config-settings "--build-option=--cpp_ext --cuda_ext --fast_layer_norm --distributed_adam --deprecated_fused_adam --group_norm"
+        --config-settings "--build-option=--cpp_ext --cuda_ext --fast_layer_norm --distributed_adam --deprecated_fused_adam --group_norm" && \
+    cd .. && rm -rf apex
This is a big enough change that it's probably worth finally cleaning this up -- we need to delete these files inside this layer, or they'll always be in our layer history and contribute to our overall image size.
rm -rf /tmp/*
set -eo pipefail
uv pip install --no-deps --no-build-isolation -r /requirements-docker.txt
uv pip install --no-deps nemo_run@git+https://github.com/NVIDIA/NeMo-Run.git@${NEMO_RUN_TAG}
nemo_run has an undeclared hatchling build-time dep, so we need to install it separately
requirements-docker.txt (Outdated)
Created this by diffing `pip freeze` from the base image and our current bionemo2 image, and cleaning up a few lines (apex, TE, etc.) that we still install separately.
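The freeze-diff step can be sketched roughly like this (the file names and package pins below are illustrative stand-ins, not the actual image contents):

```shell
# Hypothetical sketch: derive the added-package list by diffing two
# `pip freeze` outputs, one captured from the base image and one from
# the built bionemo2 image.
printf 'numpy==1.26.0\ntorch==2.2.0\n' > freeze-base.txt
printf 'apex==0.1\nhatchling==1.21.0\nnumpy==1.26.0\ntorch==2.2.0\n' > freeze-bionemo2.txt
# comm -13 keeps lines unique to the second (sorted) input: the pins we added.
# grep then drops packages (apex, TE, ...) that are still installed separately.
comm -13 <(sort freeze-base.txt) <(sort freeze-bionemo2.txt) \
  | grep -v '^apex==' \
  > requirements-docker.txt
cat requirements-docker.txt
```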
d5d9b18 to 50278da
/build-ci
This file lets us check the sanity of the requirements-docker.txt file (if we run it, it shouldn't change), and also lets you more easily add new dependencies -- you can add lines to requirements-docker.txt, run this script, and it should add new dependencies of that package.
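The "if we run it, it shouldn't change" property is just an idempotence check; a rough sketch of its shape below, with the real generation step stubbed out by a plain sort (file contents and names are hypothetical):

```shell
set -eo pipefail
# Hypothetical sketch of the sanity check: regenerate the pin file and
# diff it against the committed copy; any diff output means the
# committed file has drifted from what the generator would produce.
printf 'torch==2.2.0\nnumpy==1.26.0\n' > requirements-docker.txt
sort requirements-docker.txt > requirements-docker.regen.txt  # stand-in for the real generator
# An empty diff (exit code 0) means re-running the generator reproduces the file.
diff -u <(sort requirements-docker.txt) requirements-docker.regen.txt
```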
WORKDIR /workspace/bionemo2

# Install 3rd-party deps and bionemo submodules.
COPY ./3rdparty /workspace/bionemo2/3rdparty
Can we set /workspace/bionemo2 as some variable for reference in the Dockerfile? It used to be BIONEMO_HOME. Or is it automatically set as HOME?
So right, actually this is a good point; we should clarify this in the design doc. I would prefer not to copy 3rdparty and sub-packages into the container (likewise with the docs/ and scripts/ we copy later).
This PR doesn't actually change anything about the default user in the release container, or how tests are copied over. We could just create a bionemo user with some $HOME directory that users would use? But I think that requires us to:
(1) align on whether it's OK to omit tests and scripts from the image itself, with the expectation that users would mount the files they want to run into the image, and
(2) change CI so that tests are mounted into the container.
If we're sticking to the pytorch base images, we may need to do a lot of this dependency pinning manually -- tools like pip-compile or uv sync won't play nicely with the mix of inherited and newly installed packages.
Creates a large requirements-docker.txt file that pins all of the packages we add during our container build. This would hopefully let us be a bit more explicit about our 3rdparty dependencies in our images.
I'll try to gather feedback on container design in this document: https://docs.google.com/document/d/1Q81dnKPvHsnKn9c5Qc5EEKXDPJVZUA29c19Aj0CumkQ/edit?usp=sharing
Still TODO would be some way of automatically generating this file from "scratch" (maybe from some requirements.in file).