I don’t have a problem with strict aliasing and I don’t remember a time when it wasn’t in the C standard. Quite possibly it wasn’t in K&R, I don’t know. The rules are pretty straightforward - don’t dereference a pointer to a particular type unless an object of that type is actually at the address in question, or you are dereferencing a pointer to char/unsigned char/signed char (signed char permitted in C, not in C++). Or in other words, don’t use pointers for type punning except at the byte level. But if you don’t like it, all compilers I know of have a flag to turn strict aliasing off.
What I was objecting to is breakage and instability. In the context of pointer dereferencing in particular I was complaining about std::launder. More generally I was complaining about the disgraceful state of affairs reported in P0593R6: Implicit creation of objects for low-level object manipulation.
Placement new is necessary for types whose constructors actually do something. I don’t think I would describe it as a hack, but the fact that array placement new is broken is also disgraceful. To require trivial types to be constructed this way where uninitialized memory is in use, and so break the long-standing practice of using assignment or memcpy to do so, is in my view unacceptable.
Nix can be configured to look in a binary cache. That’s what Hydra and Cachix provide. The idea is you should never need to compile the same artifact more than once, across all your developers’ machines and your CI.
As I wrote elsewhere, this isn’t enough. Package names and versions need to be human-readable and -writable. It is not sufficient that the source artifact has a human-readable name/version, because the binary artifact is the one that’s going to be installed.
Also, I feel I must note again that I was talking about -production- systems, not developer desktops. There, installing any sort of compiler is strictly forbidden. And from this it follows that developers should use, as much as possible, the same binary package managers as production. Why? Because developers are -paid- to develop software that will later run in production. Having a large amount of skew between dev and prod is always bad, never acceptable.
To your second point, I don’t see the difference. As I said, Nix cache can work for CI builds i.e. the builds that actually get deployed to production.
“git commit SHAs”. But that’s not what’s used to communicate durably between humans: tags are.
the human-readable name is for the source package/version, not the binary artifact after being built, right?
Look: I’ve actually maintained fleets of machines into the hundreds singlehandedly (yeah, this was before “cloud”). And to do that, you want -binary- packages, never source-code. B/c errors happen, and when they happen you want things to -stop- (not fall back to compiling the source).
Also, does Nixpkg have a notion of the “tee diagram”? That is, there’s the source artifact and the binary result artifact. There are the other dependencies that need to be around, to -use- the binary result artifact. And there are the tools that need to be around, to convert the source artifact into the binary artifact (to “build” it). Does Nixpkg have these distinctions, so that you can install (say) a binary HTTP server, without having to install GCC simply because the HTTP server was compiled with CC ?
Yes? I don’t see why not. I mentioned an example of a git package. It’s installed as a binary, without needing a gcc toolchain to compile it first.
Edit: Nix is a way of describing how to get or build packages. The description can be ‘download this tarball, untar it, and move these files into these locations’, or it can be ‘download, untar, run a build using such-and-such toolchain, then install’. The details can be abstracted. You can describe an entire system as a Nix expression, describing all its dependencies. It takes care of enforcing the dependencies and then running any commands you specify to build and install it.
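As a sketch (the package name, URL, and hash are placeholders, not a real package), a "download a prebuilt binary and install it" derivation looks much like a from-source one; only the phases differ:

```nix
{ stdenv, fetchurl }:
stdenv.mkDerivation {
  pname = "example-server";
  version = "1.0";
  src = fetchurl {
    url = "https://example.org/example-server-1.0-linux-x86_64.tar.gz";
    sha256 = "0000000000000000000000000000000000000000000000000000";
  };
  # Here installPhase just moves files into place. A from-source
  # package would instead declare nativeBuildInputs (the toolchain)
  # and let the default buildPhase run the build.
  installPhase = ''
    mkdir -p $out/bin
    cp bin/example-server $out/bin/
  '';
}
```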
Look at Debian: there are source packages, and binary packages, and a process for building the former to produce the latter. There are “depends” and “build-depends”. “Install binary packages” doesn’t mean “hey, we can take a binary tarball and install it”. It means that there’s a process for taking source and producing a binary installable, which IS the artifact that gets installed, and the source is in no way, neither conceptually nor actually, involved in that installation.
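That depends/build-depends split is exactly the "tee diagram" distinction: in a (hypothetical) debian/control file, Build-Depends names the toolchain needed to -produce- the binary package, while Depends names only what the built binary needs at run time:

```
Source: example-httpd
Build-Depends: debhelper (>= 13), gcc, libssl-dev

Package: example-httpd
Depends: libssl3, adduser
Description: example HTTP server
```

Installing example-httpd pulls in libssl3, never gcc.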
It’s the difference between running an installer and running “checkinstall make install”. The latter is cute, but it’s not a substitute for an actual installer and actual binary install-artifacts.
But an ‘install the binary artifact, falling back to building from source if it’s not available’ strategy is exactly what one would want for local development (which is where this all started–people were discussing cargo vs opam, after all).
Anyway, Nix solves a lot of the headaches of development and deployment–in fact it does everything you specified for real-world use–which is why I think it or something very like it is in the mix for the future of development.
The problem with Makefiles is that they are too high-level and flexible. It could genuinely be difficult to integrate two Makefiles together, or a dune config and a Makefile. Think of combining two high-level languages and how difficult that is (FFI etc.).
But ninja, OTOH, is sufficiently low-level that you could inline it within the dune file. Think of it as asm statements within your C/C++ code. It’s ugly, but given that asm is sufficiently low-level, the C compiler can control things by providing the asm black box what it needs and taking what the asm black box produces.
It’s true that ninja is meant for executing, not writing. (I would argue that it is better for reading than a Makefile, but that is my own experience.)
But you can write ninja statements, and that would be the price to pay for the ability to integrate the dune build with some externally generated build code.
What would be even more awesome is if dune could always parse a ninja rules file and produce a decent dune config.
So what I think would actually be feasible is using ninja as a low-level format for build-related information interchange (instead of Makefiles).
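For illustration, this is the kind of low-level ninja fragment an external generator might emit and a higher-level tool could, in principle, consume (file names hypothetical):

```
rule cc
  command = gcc -c $in -o $out
  description = CC $out

rule link
  command = gcc $in -o $out

build main.o: cc main.c
build app: link main.o
```

There is deliberately no logic here, just rules and edges, which is what makes it plausible as an interchange format: a consumer only has to understand the dependency graph, not a macro language.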