A problem was identified in February with the camlp5 7.03 package when installed via opam. Under certain circumstances, it is possible for the package removal instructions to execute rm -rf /, with potentially devastating consequences for your files if your rm command is non-GNU (and so lacks the --preserve-root protection that GNU rm enables by default), which includes macOS and the other BSDs.
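The classic shape of this failure is an unguarded variable in a removal rule. Purely as a hedged sketch of the failure mode (the variable name and paths here are invented, not the actual camlp5 packaging code):

```shell
# Hypothetical illustration (NOT the real camlp5 removal code): if the
# prefix variable ends up empty, the naive command expands to an rm on a
# path at the filesystem root.
PREFIX=""
echo "would run: rm -rf $PREFIX/lib"   # expands to: rm -rf /lib

# Guard: ${VAR:?} makes the shell abort the command when VAR is unset or
# empty, so rm never executes. (Run in a subshell here so the demo
# continues past the deliberate failure.)
( rm -rf "${PREFIX:?}/lib" ) 2>/dev/null || echo "guard tripped, nothing deleted"
```

GNU rm additionally refuses to operate on / itself by default (--preserve-root), which is why GNU/Linux users saw this non-fatally while BSD-userland systems were fully exposed.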
Initially, this was seen non-fatally on GNU/Linux systems, and it was believed to have been successfully patched on 18 Feb, with only a 48-hour window of exposure for anyone who updated opam between 16 and 18 Feb and then hadn't updated since. However, we failed to take upgrading the system compiler into account. If you haven't updated opam since 18 Feb 2018, have camlp5 installed in your system switch, and upgrade your system compiler to OCaml 4.06.1 using your OS package manager, then your system is at risk from this issue.
Most regrettably, several users have been hit by this issue. This issue affects opam 1.x only - if you have been testing the opam 2 release candidate then your system is not affected (but we still recommend you run opam update regularly). opam 2 Release Candidate 2 includes sandboxing which would prevent this kind of issue in future.
Opam installed from brew doesn't seem to have root access, and I would guess most people installed it like that. Of course, it would be bad enough if it deleted your user folder. What are the chances of that?
Indeed, opam actually displays a warning if run as root. But deleting the user’s HOME is quite bad already.
The reports we have been getting on the tracker, e.g. https://github.com/ocaml/opam/issues/3316, point to at least 3 victims of the issue. With the upgrade of the system compiler on Homebrew, though, it might concern many more.
I’m curious, how often and why do people upgrade their system compilers? The only time I upgrade my system compiler is when something gets so messed up that I have to reinstall OCaml from scratch.
I think of the system compiler as just what you need to get started. I might use it for a while, but the next time I want a new compiler, it’s always in a new switch, and after that I might never use the system compiler at all, as I understand it. When I create another new switch, I’m building it from the current switch. Is that incorrect?
Well, in my case, even though I use the ocaml compilers from various switches, the system ocaml gets upgraded once in a while by the package manager of my linux distribution, whenever I do a system update.
I guess it’s similar for homebrew and such, when you ask it to upgrade all the installed packages.
@dra27, who wrote it, may be able to tell you more, but the script does scan the entirety of $HOME to look for opam roots. If you have tons of files and an HDD, that can indeed be prohibitively long. Could that be your case? You mentioned other issues?
I suspect that it would be best to prompt the user to ask if they have other opam roots. If they don’t know, that means they don’t. The slow speed is a real problem in itself — one probably has to warn that it is scanning the entirety of the home directory and that it might take a very long time. That said, it is really slow at the beginning, and it hangs at the end, which is weird.
What’s the intended effect of the
' opam-detect.sh '{}' \;
at the end of the script? Note that the quote marks there are unbalanced, and I’m not sure what it would be for in the first place.
[Edited to add: I now understand. You’re doing a multiline find command. This is quite non-idiomatic shell.]
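For readers puzzling over that fragment: my best reconstruction of the idiom (a sketch of the shape, not the actual opam-detect script) is a single find whose -exec body is one inlined multiline sh -c string, with opam-detect.sh passed as $0 purely so diagnostics carry a script name. The real script scans "$HOME"; this demo scans a throwaway directory so it is safe to run anywhere, and the opam-version check is a plausible guess at how a root's config file is recognised:

```shell
# Reconstruction of the find/-exec idiom (a sketch, not the real script).
demo=$(mktemp -d)
mkdir -p "$demo/.opam"
printf 'opam-version: "1.2"\n' > "$demo/.opam/config"

# The whole helper body is one quoted string; inside it, $0 is
# "opam-detect.sh" and $1 is one candidate file found by find.
find "$demo" -name config -type f -exec sh -c '
    grep -qs "^opam-version:" "$1" && printf "possible opam root: %s\n" "$1"
' opam-detect.sh '{}' \;

rm -rf "$demo"
```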
By the way, the env trampoline for sh is unnecessary, it is defined in POSIX that /bin/sh is always available, and I’m not sure there has ever been a Unix machine without an sh in /bin/ since the very early 1970s.
BTW, the use of exec in this manner for a full shell script off of a find is going to be extremely slow, and non-idiomatic. A simple find for files named config in a for loop, as in:
for i in `find . -name config -type f -print`
do
...
done
will be significantly faster because of the lack of exec calls to new shell scripts. It’s also more idiomatic.
[Edited to add: Although I have a couple thousand files in my hierarchy named “config”, I think this is not the speed issue here. The find itself is quite slow, and I have an SSD.]
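As an aside, the quoted for-loop has a correctness hazard independent of speed: the unquoted backtick expansion is word-split, so any path containing whitespace is mangled. A throwaway demonstration (constructed here for illustration, not part of the thread's script):

```shell
# Demo of the loop's fragility: word splitting breaks paths with spaces.
d=$(mktemp -d)
mkdir -p "$d/my project"
touch "$d/my project/config"

for i in `find "$d" -name config -type f -print`
do
    echo "saw: $i"
done
# The single file comes out as two bogus fragments, e.g.
#   saw: /tmp/.../my
#   saw: project/config
# neither of which is a real path.

rm -rf "$d"
```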
@perry: your suggestion with processing find in a loop is fundamentally wrong, and any script you’ve ever seen doing that is broken. You cannot process the output of the find command: there are no exceptions. The output of find can only be processed using -exec.

I’m not that convinced that the shell invocation will be that much more expensive than any other process invocation, and I wanted to avoid the need for two shell scripts, which is why it’s done on a multiline string constant (I don’t really buy comments about being idiomatic or not with shell scripts, to be honest).

Posix may define that /bin/sh is always available, but on Solaris it’s not necessarily a Posix sh. I use $(...), so I need a Posix sh, hence the trampoline. Not that I expect any Solaris users are affected, but I find having scripts which actually work to be infinitely more important than stating facts about a standard which is rarely followed properly in any implementation! I have an outstanding bug in opam2 because busybox’s ps doesn’t implement Posix minimum command line args…
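For what it's worth, -exec can also batch: with the `{} +` terminator, find passes as many paths as possible to each invocation, and the inlined script loops over "$@", so the per-file shell cost largely disappears while filenames stay intact. A sketch (again against a throwaway directory, not the real script):

```shell
# `-exec ... {} +` hands find's matches to as few shell invocations as
# possible; inside the script, "$@" holds a batch of paths, each intact.
d=$(mktemp -d)
mkdir -p "$d/a" "$d/b c"
touch "$d/a/config" "$d/b c/config"

find "$d" -name config -type f -exec sh -c '
    for f; do
        printf "checking: %s\n" "$f"
    done
' batch-demo {} +

rm -rf "$d"
```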
Coming back to the point, I’d love to look into the detail of “behaving badly”. What bad behaviour did it exhibit? The output of the script begins by saying the directory it’s scanning, I guess I can add This may take some time. The full scan should not be done as an option - various instructions at various points have recommended copying .opam roots to back them up; I’d prefer to ensure that they’re all located, rather than relying on a user’s memory of whether they’ve made any. When you say “mysteriously hanging at the end”, how do you know it was the end? How long did you leave it? That could be debugged by adding echo "Scan complete" >&2 to the bottom of the script.
In basic summary: I’ll happily make the script faster, but I’m not going to recommend people download a script which might go wrong if they happen to have some weird filenames in their HOME tree…
First, on Solaris, the env command will generally find /bin/sh, which is not a POSIX shell, so your script will probably fail there unless you use backquotes instead of $(...). You can’t avoid that mess, unfortunately. It was discussed recently in some Mantis ticket IIRC. But this is a side issue.
We’ll have to agree to disagree there. I find the use of -print0 covers my needs properly (presuming you’re worried about special characters in file names), but this doesn’t matter, it isn’t causing trouble for the users. If you want, we can discuss shell tricks another time, it is also a side issue.
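For completeness, the -print0 pattern presumably meant here looks like the following; note that while -print0 itself is widely available (GNU and BSD find, though not POSIX), the consuming read -d '' is a bash feature, not POSIX sh:

```shell
# NUL-delimited find output survives any filename, even embedded
# newlines; but `read -d ''` is bash-only, not POSIX sh.
d=$(mktemp -d)
mkdir -p "$d/my project"
touch "$d/my project/config"

find "$d" -name config -type f -print0 |
while IFS= read -r -d '' f; do
    echo "saw: $f"
done

rm -rf "$d"
```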
The main problem, I think, is the length of time the find command takes. Most of what I’m seeing can be explained by the find being very long. Perhaps it can be avoided unless the user is aware of having multiple roots. Most people don’t know multiple roots are even possible, and if a user doesn’t know they are possible, the script can just look in .opam.
Oh, ah, the page I was reading from noted further that that trick isn’t actually correct, and only works for bash, which would defeat the point. My bad, however irrelevant.
Sure, but your demonstration was -print, and -print0 is not portable! I agree it’s a side-issue - but it’s one you brought up! I would still like to get back to:
you used the word “including”, but so far have only referred to the find command being slow. What was the other bad behaviour?
Okay, just to be clear, I thought it was hanging after finding my .opam dir after a long time, but it wasn’t, it was just continuing to search my (huge) SSD for more things. The only problem seems to be the time it takes. (I really do have several thousand files named “config” in my home dir, but it turns out that the exec time for the shell is small compared to the read time for the whole SSD.)
OK, thanks - definitely worth tweaking the messages so that it’s clear that it could both take a while and also that it’s finished; I’ll push a revision.