remove reliance on rapidsai/miniforge-cuda (and therefore ci-imgs), upgrade cuvs-bench-cpu to miniforge 25.9.1-0 #836
base: main
Conversation
force-pushed from 3972d33 to 48075ba
```diff
 ARG RAPIDS_VER=26.02

-FROM condaforge/miniforge3:24.11.3-2 AS cuvs-bench-cpu
+FROM condaforge/miniforge3:${MINIFORGE_VER} AS cuvs-bench-cpu
```
Calling out that this is technically a breaking change, because it updates the `condaforge/miniforge3` used as a base for `rapidsai/cuvs-bench-cpu` from 24.11.3-2 to 25.9.1-0.

I think that's acceptable, especially because the builds do a `rapids-mamba-retry update --all -y -n base` a few lines down, but just calling it out.
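As a minimal sketch of the pattern being discussed (the plain `conda` calls here are an illustrative assumption; the actual build uses the `rapids-mamba-retry` wrapper mentioned above):

```dockerfile
# Sketch only -- not the literal cuvs-bench-cpu Dockerfile.
ARG MINIFORGE_VER=25.9.1-0

# Base bump: condaforge/miniforge3 24.11.3-2 -> ${MINIFORGE_VER}
FROM condaforge/miniforge3:${MINIFORGE_VER} AS cuvs-bench-cpu

# A few lines later the build updates everything in the 'base' env, so the
# installed packages end up newer than whatever the pinned miniforge release
# shipped. That's why the base-image bump itself is low-risk.
RUN conda update --all -y -n base && \
    conda clean -afy
```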
I tested this using the command below, with a CPU-only algorithm and the smallest dataset listed at https://github.com/rapidsai/cuvs/blob/bae4cdbd0003c1572c0043541ff9826a2628762a/docs/source/cuvs_bench/index.rst
```shell
IMAGE_URI="docker.io/rapidsai/staging:docker-cuvs-bench-cpu-836-26.02a-py3.10-amd64@sha256:9026429656d09df7e2f59e75d6c6c4db0a112251378cce7577c45e8547404346"

docker run --rm -it \
    "${IMAGE_URI}" \
    "--dataset fashion-mnist-784-euclidean" \
    "--normalize" \
    "--algorithms hnswlib --batch-size 10 -k 10" \
    ""
```

And it seemed to run ok.
```diff
 ARG RAPIDS_VER=26.02

-FROM rapidsai/miniforge-cuda:${RAPIDS_VER}-cuda${CUDA_VER}-base-${LINUX_VER}-py${PYTHON_VER} AS cuvs-bench
+# --- begin 'rapidsai/miniforge-cuda' --- #
```
This is a lot of boilerplate to include twice. We're sure this is better than having a common miniforge-cuda image and using that?
I really really think it is worth it.
Being able to focus https://github.com/rapidsai/ci-imgs only on RAPIDS CI, and not need to think about how changes there will affect users on Brev, Databricks, Sagemaker, etc., is well worth the duplication.

Also, these images get published to NGC and go through more rigorous compliance and security scanning than the CI images, so it's helpful to know we can make changes in https://github.com/rapidsai/ci-imgs that won't endanger our ability to release these on time.
It'll be a lot less boilerplate in a follow-up PR where I remove the things that are unnecessary in this context (for example, we don't build RockyLinux images here so anything about that OS can be cut out). I intentionally made this PR almost a straight copy-paste of the Dockerfiles so we could have high confidence this wasn't changing the resulting images too much.
I think there's also a significant opportunity to move some of the identical code into scripts in a shared context, instead of having it repeated across the Dockerfiles.
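As a rough illustration of that idea (the file name and layout below are hypothetical, not part of this PR), a shared build context could hold one copy of the common steps and each Dockerfile could copy and run it:

```dockerfile
# Hypothetical: context/install-miniforge.sh would hold the steps currently
# duplicated across the Dockerfiles; each Dockerfile then needs only these lines.
COPY context/install-miniforge.sh /tmp/install-miniforge.sh
RUN /tmp/install-miniforge.sh && rm -f /tmp/install-miniforge.sh
```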
Contributes to #832
Removes reliance on `rapidsai/miniforge-cuda` here, and therefore fully decouples this project from https://github.com/rapidsai/ci-imgs.

Notes for Reviewers
I've done the following here:
- replaced `FROM rapidsai/miniforge-cuda` with the relevant contents of https://github.com/rapidsai/ci-imgs/blob/main/ci-conda.Dockerfile (see the sketch after this list)
- left out sccache, to avoid needing to pass around a GitHub token
- otherwise kept the content from ci-conda.Dockerfile unchanged

There are things in the diff that look unnecessary (like handling the possibility of rockylinux8 when we only build Ubuntu images here)... let's defer those cleanups to follow-up PRs after this.
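For readers who haven't opened the diff, here is a rough sketch of the kind of content that replaces the old `FROM rapidsai/miniforge-cuda` line. The version numbers, package list, and installer URL below are assumptions for illustration, not copied from this PR or from ci-conda.Dockerfile:

```dockerfile
# Illustrative sketch only -- not the literal contents of this PR's Dockerfile.
ARG CUDA_VER=12.8.0
ARG LINUX_VER=ubuntu24.04
ARG MINIFORGE_VER=25.9.1-0

FROM nvidia/cuda:${CUDA_VER}-base-${LINUX_VER} AS cuvs-bench

# --- begin 'rapidsai/miniforge-cuda' --- #
ARG MINIFORGE_VER
RUN apt-get update && \
    apt-get install -y --no-install-recommends ca-certificates curl && \
    rm -rf /var/lib/apt/lists/*

# Install miniforge from the matching conda-forge release.
RUN curl -fsSL -o /tmp/miniforge.sh \
      "https://github.com/conda-forge/miniforge/releases/download/${MINIFORGE_VER}/Miniforge3-${MINIFORGE_VER}-Linux-$(uname -m).sh" && \
    bash /tmp/miniforge.sh -b -p /opt/conda && \
    rm /tmp/miniforge.sh
ENV PATH=/opt/conda/bin:${PATH}
# --- end 'rapidsai/miniforge-cuda' --- #
```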