Some more possibilities for distributing applications: Flatpak, Snap, and AppImage. They have their downsides, though.
If your application is not a GUI, though, I'd recommend shipping it as a Docker container. That should be fairly portable: it will run on most distributions, and even distributions that do not ship Docker by default usually have podman, which does a pretty good job of running the same images (there is also now an OCI standard for container images).
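As a minimal sketch of that portability, the same image can be launched under whichever runtime happens to be installed, since podman is CLI-compatible with docker for basic commands like `run` (the image name `myorg/mytool:1.0` here is hypothetical):

```python
import shutil
import subprocess

# Use whichever container runtime is installed; podman accepts the same
# arguments as docker for basic commands like `run`.
runtime = shutil.which("docker") or shutil.which("podman")
if runtime is None:
    raise RuntimeError("no container runtime (docker/podman) found")

# "myorg/mytool:1.0" is a hypothetical image name for illustration.
subprocess.run([runtime, "run", "--rm", "myorg/mytool:1.0"], check=True)
```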
Aside from a Docker container, there are plain packages built on the exact release you are shipping for. Building on an older version and running on a newer one might work in some limited cases, but I've run into trouble with that in the past when upgrading between minor CentOS 7.x versions: there was an ABI breakage in libnss, but CentOS purposefully overrode the soname and claimed there was no ABI change, and of course things broke at runtime when the application was built against the old NSS and run against the new one. The solution was to rebuild the application with the newer NSS declared as a minimum dependency.
Building and shipping individual executables without a package manager and without containers is likely to run into problems eventually (especially as newer distros get released). There was also an older standard, the Linux Standard Base (LSB), and you could configure your compiler to be LSB-compliant, i.e. to only expose the headers/libraries/symbols that the LSB defines. In practice, however, some crucial symbols were missing, and not many distros are actually LSB compliant, so e.g. trying to build and ship something like nginx using LSB didn't quite work.
You could use a service such as openbuildservice.org, which supports building packages for multiple distributions (both RPM- and deb-based): openSUSE:Build Service supported build targets - openSUSE Wiki
Another possibility is to define a list of distros you support and use containers to build a package for each of them. This is actually fairly simple to automate and parallelize (see the sketch below); if your package is small, building it for all distros is done in no time, especially if you use Docker layer caching effectively to preinstall your build dependencies in an early layer.
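Here is a rough sketch of what that automation could look like, assuming you keep one Dockerfile per supported distro (the `docker/Dockerfile.*` paths, the `mypkg-build` tag, and the distro list are all hypothetical), with build dependencies installed in an early layer so Docker's cache reuses it across runs:

```python
import subprocess
from concurrent.futures import ThreadPoolExecutor

# Hypothetical per-distro Dockerfiles, e.g. docker/Dockerfile.ubuntu-22.04,
# each installing build deps in an early (cached) layer and producing a
# .deb/.rpm for that distro.
DISTROS = ["ubuntu-22.04", "debian-12", "fedora-39", "rockylinux-9"]

def build(distro: str) -> None:
    subprocess.run(
        ["docker", "build",
         "-f", f"docker/Dockerfile.{distro}",
         "-t", f"mypkg-build:{distro}",
         "."],
        check=True,
    )

# The builds are independent of each other, so run them in parallel;
# check=True makes any failed build raise an exception.
with ThreadPoolExecutor() as pool:
    list(pool.map(build, DISTROS))
```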
Nix closures might also work, but they're a bit heavy: I think you would end up shipping everything, including a libc, which might be older than what the user has on their system. Then again, Docker images are the same in that respect, except the libc version you get is more "well known".
Although pip might seem like a nice solution, in practice I've run into various compatibility issues, especially with libraries related to numpy, opencv, and ghostscript. I ended up either installing distro packages for some and using pip's prebuilt packages for the rest, or forcing pip to rebuild all packages from source instead of using binary wheels (sketch below). This is especially problematic when a new distro comes out, e.g. with a Python 3.9 -> 3.10 version bump before much of pip's ecosystem supports it; none of the py 3.9 prebuilt binaries would've worked in that environment. Here is one example of pip binary packages going wrong (especially as various pieces of the Python ecosystem upgraded their numpy dependencies at different rates): Install numpy-1.20.0rc1 causing errors · Issue #534 · dask/fastparquet · GitHub
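For reference, the pip flag that forces source builds is `--no-binary :all:`; a small sketch (the package list is just an example):

```python
import subprocess
import sys

# Force pip to compile everything from source instead of downloading
# prebuilt wheels, so the resulting binaries match the local Python
# version and ABI. Source builds of numpy/opencv are slow, but they
# avoid the wheel-compatibility problems described above.
subprocess.run(
    [sys.executable, "-m", "pip", "install",
     "--no-binary", ":all:", "numpy", "opencv-python"],
    check=True,
)
```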