fix: ensure spack env exists before activation #131
base: master
Conversation
Pull request overview
This PR attempts to add `spack env create ${ENV}` before activating a Spack environment, supposedly to ensure the environment exists before activation. However, this change introduces an incorrect command that conflicts with the existing directory-based environment activation pattern.
Changes:
- Adds `spack env create ${ENV}` on line 55 before `spack env activate --without-view --dir ${SPACK_ENV}`
```diff
  # Concretization (default environment)
  RUN <<EOF
  set -e
+ spack env create ${ENV}
```
Copilot AI, Jan 14, 2026
The `spack env create ${ENV}` command is incorrect and unnecessary. The command creates a named environment 'xl' (from `${ENV}`), but line 56 activates a directory-based environment at `/opt/spack-environment/xl` (from `${SPACK_ENV}`). These are two different concepts in Spack. The directory-based environment already exists with a valid `spack.yaml` file copied from the build context on line 44, so `spack env activate --dir` can be used directly without prior creation. This pattern is used consistently elsewhere in the Dockerfile (lines 176, 281, 320), where environments are activated without creation. The added command should be removed.
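For illustration, a minimal sketch of the shell body of that RUN step without the added line, following the existing directory-based pattern; the trailing `spack concretize` is an assumption based on the step's "Concretization" heading:

```bash
# The directory-based environment at ${SPACK_ENV} already carries a spack.yaml
# copied from the build context, so it can be activated directly; no prior
# `spack env create` is needed.
set -e
spack env activate --without-view --dir ${SPACK_ENV}
spack concretize   # assumed step, based on the "Concretization" heading
```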
Once again, regardless of whether this is submitted from a local branch, this is an incorrect change. I have commented on that before. Copilot has now commented on it twice too. What is your motivation here?
For motivation, see #23 (comment). I am not sure whether creating a named environment is the right solution. The proposal is based on my experience with named environments, where I did not run into this issue of not being able to install anything interactively later in the same environment. The CI shows a couple of failing checks, but it is difficult to tell whether they are caused by the proposed change or by something unrelated.
There is no difference between named and anonymous environments for the use case you link to. The only thing that's different is that named environments are directories under a specific internal path. The only thing you get out of a named environment is that you don't have to type the whole path prefix. But both locations are going to be read-only in a singularity container. If something doesn't work in directory environments, it won't work for named environments. Installing additional packages alongside the environment, without changing it, can be achieved with `concrete_environment`. Installing additional packages that do require changing the environment is not going to work inside singularity containers. You'd have to copy the environment, and work in a separate environment.
I mean …
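As an illustration of the "separate environment" approach described above, a minimal sketch, assuming a Spack version that supports include_concrete; the environment path and the installed environment location are illustrative only:

```bash
# Create a separate, writable environment in the user's home directory that
# pulls in the concrete specs of the container's read-only environment.
# Both paths and the --include-concrete option are assumptions here.
spack env create --dir ~/my-env --include-concrete /opt/spack-environment/xl
spack env activate --without-view --dir ~/my-env
```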
From various sources, I had the impression that interactive work with EIC containers is encouraged (for example via …). It was a bit surprising to me that a straightforward command like … did not simply work. That said, I'm perfectly happy to work around this by creating a new environment based on the included concrete environments. Are there recommended EIC instructions for this kind of workflow? I imagine I'm not the first person to try installing packages interactively with Spack inside EIC containers.
@wdconinc This was largely meant to build the container so that @plexoos can test it out. That should now be possible by pulling https://github.com/eic/containers/pkgs/container/eic_xl/642847383?tag=unstable-pr-131, if I'm not mistaken.
Prediction: the named environment will exist but will be empty. It is created, but it is never activated and no configuration is ever added to it (certainly not the contents of spack.yaml). Instead, the anonymous environment in the directory is activated (it does not need to be created, since the spack.yaml configuration already exists there).
Sorry, I missed your post earlier.
Working with containers is indeed encouraged, in particular working with singularity containers since we run on sites that generally don't allow docker use. We have been trying hard to support workflows that are possible in singularity, and to avoid developing workflows that only work in docker. Singularity has therefore always been the reference technology when talking about using containers because, whereas docker gives you full write access to everything, singularity is the more restrictive technology.
The command … does not work here for the following reason. We are (ideally) using environments with unified solutions for package dependencies: the environment contains one single version of each package (roots and dependencies), with a unique configuration of variants for that package. The reason for this is that we don't want to (accidentally) end up with multiple ROOT versions in a container, or otherwise with inefficient duplication of packages (which can cause all kinds of other issues). The unified environments mean that adding a package with … requires re-concretizing the entire environment.
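For reference, a minimal sketch of what a unified environment looks like in spack.yaml (written via a heredoc purely for illustration; the spec list is elided):

```bash
# Unified concretization keeps a single version and variant configuration of
# every package across the whole environment.
cat > spack.yaml <<'EOF'
spack:
  specs: []          # package list elided
  concretizer:
    unify: true      # one concrete spec per package across the environment
EOF
```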
A lot of the container's spack infrastructure is set up to allow the installed packages to be read-only while allowing users (in singularity) to define their own environments that re-use (with preference) the already installed packages, but also allowing them to install additional packages. That means defining your own environment and installing it, even if it re-uses already installed packages. That is what is (to some extent) tested in CI, https://github.com/eic/containers/blob/master/.gitlab-ci.yml#L487-L517.
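A rough sketch of that kind of user-side workflow (illustrative only, not a transcription of the linked CI job; the spec is a placeholder):

```bash
# Define your own environment in a writable location, then add and install
# packages, preferring what is already installed in the container.
spack env create --dir ~/work-env
spack env activate --without-view --dir ~/work-env
spack add zlib               # placeholder spec, use whatever you need
spack concretize --reuse     # prefer already installed packages
spack install
```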
Resubmitting #25 from a local branch, as suggested by @veprbl