You will all be familiar with the difficulty of contributing a large new set of libraries to modern opam: when the new release makes breaking interface changes, you first need to establish upper bounds on all the third-party packages that use your libraries. This difficulty exists because of how successful we have been at importing multiple versions of the same package into opam-repository and at leveraging the opam solver to decide on the freshest versions of packages to install.
In order to make the contribution process a little easier, we are experimenting with a new release strategy for the latest large library set to drop into opam-repository (in this case, the new JS Core release). It is as follows:
1. Add upper bounds to all opam-repository packages that use your library, in a single PR. This prevents the newer libraries from being selected by the solver for existing packages (a sketch of such a metadata change follows this list).
2. Import the new libraries, knowing that they will not be selected by the solver for existing packages.
3. Relax the upper bounds on third-party packages that have been tested to work with the new release.
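For concreteness, here is a sketch of what the step 1 metadata change looks like in a dependent package's opam file. The package name foo is illustrative, not a real package:

```
# Hypothetical foo.opam before step 1: no upper bound on core, so the
# solver is free to pick the new, incompatible Core release for foo.
depends: [
  "dune" {build}
  "core"
]

# After step 1: the upper bound keeps the solver on the last Core
# release that foo is known to build against.
depends: [
  "dune" {build}
  "core" {< "v0.12"}
]
```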
This three-step process effectively allows the new versions of a library set to be staged in the opam repository without causing a massive interlock with third party packages. It also (via step 3) gives maintainers time to get used to a new library set without users being affected.
The main downside to this approach is that it may break development pins, since that metadata exists separately in each upstream repository and needs to be updated with upper bounds (or preferably, the code fixed to support the newer library).
The more minor downside is that it involves the maintainer of a library you depend on adding version constraints to “your” packages. Since the only change here is to add an upper bound to an existing dependency, we judge this to be acceptable for the long-term health of the opam-repository database. However, maintainers have ample opportunity to comment on the step 1 PR if they really do not want anyone else making changes to their packages. In that case, however, we expect the maintainers who object to step up, do some testing with the newer releases, and/or propose any changes in good time.
I’d like to emphasise that this is an experiment at the moment and not firm opam-repository policy, but one that we feel is worth trying. If it does work, then we can improve the tooling around the three steps to make them as easy as possible for maintainers. In the meantime, you can see the first step in action in ocaml/opam-repository#13010, which adds upper bounds in preparation for the new version of Core.
Could you please give more details on how exactly these upper bounds are going to be added and later removed?
My concern here is distinguishing between potentially transient version constraints and ones provided by the package maintainer. Maybe it would be better to have separate files for system-generated constraints in the OPAM repository?
They were added by almost systematically adding < "v0.12" to all dependencies on Jane Street packages. I say almost because @xclerc, who wrote the script, was careful not to add them when it wasn’t necessary, for instance because the existing constraint was of the form = "<specific-version>" or there was already a conflict of the form >= "...".
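To illustrate those rules, here is a sketch with made-up package entries (not the actual script output):

```
depends: [
  "core"                    # bare dependency: the script adds {< "v0.12"}
  "async" {>= "v0.9"}       # lower bound only: becomes {>= "v0.9" & < "v0.12"}
  "sexplib" {= "v0.11.0"}   # exact version: already excludes v0.12, left alone
]
conflicts: [
  "core_kernel" {>= "v0.12"}  # equivalent exclusion already present, left alone
]
```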
@mmottl, regarding how the upper bounds will be relaxed, one idea is to have a tool try to relax them and do so when possible. Why do you think we should distinguish between the two kinds of constraints?
@dbuenzli, the last few stable releases of Jane Street packages have been becoming increasingly painful for us, and this experiment was my idea in order to make the next one smoother.
To describe the problem we are facing: when we release a new major version of the Jane Street packages in opam, one person has to release more than 100 packages at once. Such a release usually represents six months of active development by many OCaml developers and inevitably contains several breaking changes. As a result, several existing packages are not compatible with the new version of the Jane Street packages.
In order for the release to go through and be merged in the opam repository, we need to add upper bounds to these packages. To find out where these bounds need to be added, we need to analyse the output of the opam CI. This process usually requires several iterations of finding and adding upper bounds.
The CI takes a non-negligible amount of time to provide an answer and produces a large volume of information from which it is tedious to extract the relevant parts. As a result, publishing a new major version of the Jane Street packages has become a very long and painful process.
With this new strategy, the release should go through quickly and leave the opam repository in a consistent state. The work of adjusting version constraints can then be shared between various people and hopefully automated via new tools.
Wouldn’t such an automated tool simply try to build and constrain what fails?
I have the impression that the problem is the iteration itself (“this process usually requires several iterations of finding and adding upper bounds”), rather than the current optimistic strategy.
For example, what would happen if you simply unconditionally constrained failing packages that depend on the packages you submit? I’m sure opam or the CI could actually spit out some kind of repository diff with the constraints directly added to the failing packages.
You would then get a simple two-iteration submission procedure: submit, get the failing-package constraints patch from the CI, and apply it to your PR.
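For instance, the CI output could be a patch along these lines (a purely hypothetical diff; the package name and path are made up):

```
--- a/packages/foo/foo.1.0.0/opam
+++ b/packages/foo/foo.1.0.0/opam
 depends: [
-  "core" {>= "v0.9"}
+  "core" {>= "v0.9" & < "v0.12"}
 ]
```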
We often get false negatives though, so it’s likely that they’ll get in the way and that additional post-release work will still be required.
One thing I like about this approach is that it makes releasing a new major version a no-brainer for developers, and it then lets the package manager deal with the package management side of things.
Well, if you constrain something you shouldn’t have, it’s no worse than constraining everything as you suggest.
One thing I dislike about this approach is that it puts the strain on end users, who will have to complain to get things unconstrained that shouldn’t have been constrained in the first place. I rather think the burden should be on the developers who break backward compatibility…
On one hand, from a practical standpoint, it makes perfect sense, and I understand that for Jane Street it is something that would help a lot with Core library releases.
On the other hand, as a user, I selfishly wish this does not become the rule and remains contained. Every day I have to deal with both the cargo/crates.io Rust packages and opam, and I love how frictionless opam is for the user.
I understand that this comes from the awesome work of the maintainers, and that it might not be tractable in the long run. But when I compare the optimistic “constrain when it breaks, leave freedom to the user” opam workflow with the pessimistic crates.io workflow of “everything is constrained, now go ping the maintainer to make a dummy release to bump deps”, I wish almost every day that crates.io were like opam.
Adding the upper bounds is pretty straightforward (and was done in the #13010 PR referenced earlier). The benefit to end users is that the newer releases of the library being imported can then be added to the mainline repository safely, and teething problems (for example with portable builds for BSD, Windows, or non-x86) can be smoothed out without affecting users.
The difficult bit in the past, with opam 1, has been how to relax the package constraints afterwards. The reason we’re trying this approach now is that opam 2 has a handy opam install --ignore-constraints-on=<pkg> option, which should let us run builds with some upper bounds removed and then systematically commit the relaxations to opam-repository.
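As a sketch of the intended workflow (foo is a hypothetical dependent package whose metadata still carries the upper bound):

```
# Ask the solver to forget every version constraint that mentions core,
# so foo is built against the newly imported Core release.
opam install foo --ignore-constraints-on=core

# If the build and tests succeed, the {< "v0.12"} bound on foo's core
# dependency can be dropped in an opam-repository commit.
```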
Semantically, there is little difference from the current state of affairs if steps 1, 2, and 3 are all executed. There will be a difference if we forget to do step 3 and leave the upper bounds in place unnecessarily. That’s the nature of this experimental Core release: to see if we can develop an approach that makes it easier to relax upper bounds without blocking an entire release for months due to having to iterate on the entire tag/publish/test cycle for a bunch of libraries.
There’s actually a big difference. Repository users will see transient states they did not see before.
That’s true, but it shouldn’t change the user experience. For some time, you will simply not be able to upgrade to the latest version of Core that was just released.
There is a good reason for distinguishing between constraints: some constraints are necessary not for preventing build errors, but because of semantic incompatibilities.
E.g. a package maintainer might know that their package triggers bugs in some versions of a dependency. Or some version of a dependency introduces subtly different semantics that breaks certain package versions.
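For example (with a hypothetical package name), such knowledge ends up as a constraint that no build bot could infer, since every affected combination still compiles:

```
# The maintainer knows somelib 1.4.2 builds fine against this package
# but silently returns wrong results at runtime, so it is excluded.
conflicts: [
  "somelib" {= "1.4.2"}
]
```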
It is surely possible to detect most build errors automatically and hence automatically introduce constraints such that users don’t run into build errors. But even this is not necessarily reliable: just because some package combination failed to build on your build farm doesn’t necessarily mean it will fail on someone else’s seemingly equivalent platform, and vice versa. Some build problems can be highly dependent on your exact installation so we’ll always have both false positives and negatives.
I think that version constraints provided by package maintainers should always be maintained and probably kept separate from any automatically inferred constraints. Package maintainers typically know that there is some inherent build problem (possibly platform-dependent) or semantic incompatibility. Their constraints should only reflect that, not compatibility with the OPAM build farm.
Maybe some OPAM flags could be added to selectively ignore automatically introduced constraints only?
Dealing with two parallel sets of constraints might make things more complicated though. It seems to me that reviewing and possibly editing what the tool produces before accepting it should be enough. I.e. the tool could just open a PR that is then reviewed by the package maintainer. What do you think?
As @jeremiedimino notes, that’s the point of the upper bounds in the stable repository. As I noted in my original post, the primary observers of the new packages will be development pins, so we’ll just have to monitor how that goes with this release.
Hm, this is an interesting point. The vast majority of constraints in the opam repository are due to easily detectable build errors, but the constraints due to runtime behaviour do exist and are very important to preserve. For example, we’ve used them in Mirage to work around broken library combinations for some device drivers.
I think the useful distinguishing flag isn’t “automatic” vs “manual” constraints, but build-time vs run-time failures. Given the relative rarity of the runtime failure constraints, perhaps we can annotate just those with a flag to indicate to CI that it should never relax them even if a build passes.
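For instance, and purely hypothetically since opam has no such feature today, the annotation could be as simple as a structured comment that the relaxation tool refuses to touch:

```
depends: [
  # ci: never-relax (runtime incompatibility, not a build failure)
  "some-driver" {< "2.0.0"}
]
```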
Right, the constraint distinction is more between build- vs runtime. If there is a good way of annotating constraints that should never be removed automatically (for whatever reason), that would work for me, too.
It seems to me that a simple rule would be: you may add constraints to an unconstrained dependency, but you never modify existing ones (which is roughly what the procedure mentioned above does).