I’m glad to see the topic of energy and emissions brought up, and I would have to echo @avsm’s response that it is the right thing to do. We, as a community, have more or less complete control over what we deem important enough to run and deploy, and over when and for how long we decide to do it. I think it’s important that we realise there are costs associated with this: some more familiar, like the maintenance burden or the monetary cost of hosting machines, and some less familiar, like the environmental impact (the energy consumed by machines, the embodied energy of the hardware, etc.).
The work I’m currently doing is focused on getting ballpark emissions figures for the cluster of machines that run various services (the CI services summarised in ocurrent/overview on GitHub, watch.ocaml.org, etc.). We combine energy-grid data (at the moment from the UK, for the Cambridge machines) with per-machine power draw to estimate the CO2e. With better monitoring we will be able to better measure the impact of changes we make to how we run things, and to hold ourselves accountable for our environmental impact. It’s not perfect, but hopefully it’s a start.
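For intuition, the arithmetic behind such a ballpark figure is essentially power draw × time × grid carbon intensity. Here is a minimal OCaml sketch with made-up numbers purely for illustration (these are not measurements from the actual cluster):

```ocaml
(* Back-of-the-envelope CO2e estimate: average power draw (W) converted to kW,
   multiplied by hours of runtime and the grid carbon intensity (gCO2e/kWh). *)
let co2e_grams ~power_watts ~hours ~intensity_g_per_kwh =
  power_watts /. 1000. *. hours *. intensity_g_per_kwh

let () =
  (* e.g. a hypothetical 150 W machine running for a week on a 200 gCO2e/kWh grid *)
  let g =
    co2e_grams ~power_watts:150. ~hours:(24. *. 7.) ~intensity_g_per_kwh:200.
  in
  Printf.printf "approx %.1f kg CO2e\n" (g /. 1000.)
```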
Before we start worrying about whether programming language X is more efficient than Y, I think there are bigger wins to be had, for example carbon-intensity-aware cluster scheduling. The docker base images are rebuilt weekly; perhaps these rebuilds could be scheduled based on the forecast carbon intensity of the energy grid, which can fluctuate quite drastically? I hope to expand on these ideas and more soon (Q1 this year) and provide more insight into the work that’s going on, probably on ocaml.org or infra.ocaml.org. In the meantime, if anyone wants to help or is interested, please do reach out.
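To make the scheduling idea concrete, here is a rough sketch (not how any of the actual infrastructure works) that polls the public UK Carbon Intensity API and only kicks off a rebuild when the forecast is below a threshold. It assumes the cohttp-lwt-unix and yojson libraries, and the 150 gCO2/kWh threshold is an arbitrary placeholder:

```ocaml
open Lwt.Syntax

let intensity_url =
  Uri.of_string "https://api.carbonintensity.org.uk/intensity"

(* Fetch the currently forecast grid carbon intensity in gCO2/kWh. *)
let forecast_intensity () =
  let* _resp, body = Cohttp_lwt_unix.Client.get intensity_url in
  let* body = Cohttp_lwt.Body.to_string body in
  let open Yojson.Safe.Util in
  Yojson.Safe.from_string body
  |> member "data" |> index 0
  |> member "intensity" |> member "forecast"
  |> to_int |> Lwt.return

(* Placeholder threshold; a real scheduler would look at the forecast over the
   whole scheduling window rather than a single reading. *)
let threshold = 150

let maybe_rebuild () =
  let* g = forecast_intensity () in
  if g <= threshold then
    Lwt_io.printlf "Forecast %d gCO2/kWh: run the weekly image rebuild now" g
  else
    Lwt_io.printlf "Forecast %d gCO2/kWh: defer the rebuild" g

let () = Lwt_main.run (maybe_rebuild ())
```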
I’m with you on this. One of the most successful narratives oil companies pushed was to shift the blame for climate change onto individuals; the term “carbon footprint”, for example, was popularised by BP. It is true that the real, meaningful impact lies in pushing for systemic changes rather than personal lifestyle ones… However, two points:
it is still good courtesy to be mindful of one’s impact. The trap the oil companies fell into (and failed to account for) is that systemic changes often stem from the normalization of individual actions and from discussion at a social level: when these concerns stay in the public eye, the chances of them turning into legislation increase, so they only delayed the inevitable;
the other point is that the OCaml community isn’t an individual: we’re using an ecosystem where the impact of optimizations multiplies in a meaningful way, and it’s run by institutions whose central infrastructure has a similarly multiplied impact.
I hope to avoid getting into some kind of ideological discussion here. Saying that something is the “right thing to do” makes it into a moral imperative, which then prevents one from thinking about costs, benefits and especially opportunity costs. Given the tiny scope of OCaml’s ecosystem, is investing effort into energy usage tracking a good use of OCaml’s limited resources? I don’t know the answer to that, but my current attitude is quite skeptical. If one were doing the same for Javascript or Python, the answer would be crystal-clear, but for a niche language, the answer is far from obvious.
If, however, the goal is to make OCaml a leader in energy saving so as to attract people to OCaml, perhaps I’d be more partial to that. However, I would still question whether this is a judicious usage of resources given all the alternative methods of attracting people to OCaml, such as building actual libraries and projects with it. Now, it’s possible that this is a form of doing the above - that there’s a growing ecosystem of energy-related applications, in which case, once again, that sounds pretty good to me, as I’m all for enlarging the ecosystem in just about every possible way.
I think the new website has many good additions, in particular the package documentation. However, I think that the old website looks better than the new website. Luckily, most of the problems as I see them can be easily changed. Also, in my criticism, I don’t wish to demean the excellent work of the people who created the website. It’s easier for me to point and criticize from the sidelines than for people to design and implement the website.
I think that the new website has too much empty space. On the old website, the information is a lot more compact. The new website has a lot of padding, which feels gratuitous to me.
I don’t like the icons on the new website. I feel that they look similar to “stock icons” that one might grab online, which gives the website a cheap appearance. In particular, the Greek building symbol above “Work Joyfully” confuses me because its relevance to the feature being advertised is not clear. I think the new Rust website shares this flaw with the icons in the “Build it in Rust” section. In contrast, the Haskell website does not have icons. It could have used icons for each point in the “Features” section, but doesn’t. I think the Haskell website looks better than the OCaml and Rust websites.
I think the windows showing the example code have corners that are too rounded. I also dislike the red, yellow, and green circles in the corner, which are supposed to look like the OS X GUI, but don’t exactly match, which makes the code windows feel cheap to me. I think it’s best to remove any icons that try to imitate a GUI. For the two examples lower on the page, perhaps make the text copy and pasteable and link to the OCaml playground as well.
Do other people agree with these points?
It’s less critical on “marketing” pages, but I agree that’s an issue on several (if not most) pages.
I don’t like the icons on the new website.
I guess it depends on which icons, but I would agree with you about the icons on the feature section on the homepage.
I actually think the homepage would deserve a thorough redesign: half the page is acting as a portal to other parts of the site at the moment, with some illustrations, which doesn’t seem particularly useful. Instead, it would be great to have a landing page that provides a more graphical version of Why OCaml?. This was the intention of the first two sections, but I think it could do a better job of telling readers why they should consider using OCaml.
I think the windows showing the example code have corners that are too rounded. I also dislike the red, yellow, and green circles in the corner, which are supposed to look like the OS X GUI, but don’t exactly match, which makes the code windows feel cheap to me. I think it’s best to remove any icons that try to imitate a GUI.
Ok, I don’t agree with all your points; I actually like the code sample on the homepage.
It’s possible it would look better with another radius and a different frame though.
For the two examples lower on the page, perhaps make the text copy and pasteable and link to the OCaml playground as well.
Which examples are you referring to?
As a side note for everyone: don’t hesitate to open issues on the GitHub repository to share feedback like this!
This is extremely useful for the maintainers of the site to prioritise work, and it can be easy to miss it in a Discuss thread.
It is a little tricky to use Peertube with multiple users (still involves password sharing), so we’re figuring it out as we go along and before finishing the promotion to production status.
I’ve been running a Peertube instance with thousands of users for years and it never involved password sharing. I’m quite surprised to hear this… Also someone (IIRC it was you :D) recently created an account for me so I really don’t understand which issue you’re talking about.
Matrix chat is already sitting alongside the venerable IRC as an open alternative.
It’s nice to hear about all the existing/future decentralized services. On the other hand, IIUC, the hosting/maintenance of all these services seems to be handled by a single person, and I feel that may go against the idea and benefits of decentralized services.
To follow on from @tmattio’s comments, I also raised the same query during the design process. As far as I can figure, most modern websites are designed for mobile usage – a huge percentage of users now come in via tablets/mobiles, so responsive design is really important. That explains the medium/small layouts, but not why there is so much whitespace in the wider screen layouts. It’s really obvious when comparing an opam.ocaml.org package description with the equivalent on ocaml.org/p. The next iteration of the design is beginning now, so please do help out with the survey @sabine just posted about and share your thoughts, and we can get these issues fixed.
The problem is pretty simple: PeerTube doesn’t support shared video channels, so one user has to own them. In our case, the intrepid bactrian account owns all the OCaml Workshop videos.
If we do create separate users, then it also looks weird. For example, @patricoferris uploaded the OCaml Workshop 2022 videos under his own account, and now when you reference them from Mastodon it looks like you’re referring to @patricoferris, since the domain portion is dropped by default. See here for a ‘toot’ that is owned by the bot, and here for one that is harder to distinguish.
And then… I did indeed create you an oups@watch.ocaml.org account in April, but you haven’t uploaded anything since. What do we do if someone else wants to take over the OUPS videos in the future and get them online? You’d need to share the account. Most of the ActivityPub services like Mastodon and PeerTube are quite user-centric at the moment, and not well suited to shared publishing. But it works well enough with a little coordination amongst ourselves and some trust. Concrete suggestions for improvement are welcome.
Decentralisation has got nothing to do with having just one hosted instance. By using these protocols, the information related to OCaml can be replicated across multiple sites and reconstructed if one service goes down. For example, my personal crank.recoil.org instance “follows” and mirrors the videos on watch.ocaml.org, as do around 50 other PeerTube instances. So the ocaml.org domain is most valuable as a namespace, where it can aggregate and publish information that is actually generated elsewhere. In an ideal world, the ACM SIGPLAN team would publish their videos on PeerTube as well as YouTube, and ocaml.org would be a bookmarking/mirroring service.
While this is the theory, in practice the ActivityPub protocol is very URL-centric, which makes it hard to recover from federated domains disappearing. You can read more about this in an excellent undergraduate project from last year by Gediminas Lelešius on Improving the Resilience of ActivityPub Services.
As for your point about hosting being centralised: I’m not the only maintainer. There are around a dozen maintainers spread across the ocurrent and opam/dune/ocaml orgs who keep everything ticking along. Some of the core machines do indeed only have a couple of people with access, but that is for obvious security reasons. No service has only one person with access, so we have a reasonable “bus factor”.
As @jbeckford observed in another thread, we are in definite need of more maintainers throughout the OCaml ecosystem. For infrastructure, the best way to get involved is by helping to scope out technologies (like Mobilizon or SourceHut, in this thread), or by contributing to the software stacks behind it (like the various CIs listed on the ocurrent site). My personal hope is that someone will start building complete ActivityPub bindings in OCaml so we can start having some Fediverse fun in our own language.
I’m specifically curious about the infrastructure hosting the *.ocaml.org services. Who owns the servers? Who has access to them? The only piece of information I could find was the governance page. What I understood from there is that maintainers don’t get access to the servers. That leaves only the delegates, and currently there’s only one. Did I get it wrong? Is this documented somewhere more precisely?
or SourceHut, in this thread
FYI, for SourceHut-related things, we have @emersion here, who’s one of the authors. I’m sure he would be glad to answer any questions to help get an instance running on ocaml.org. I created an issue for that.
It’s pretty easy to use the same perspective to say that all but the largest hyperscale operators represent ‘negligible’ shares of global compute carbon output, but I don’t think we want a set of criteria that leads to just a handful of organizations caring enough to shrink said output.
But to put a different flavour on this topic: it is specifically @avsm et al.'s focus on carbon footprint and his past writings on the subject that motivated me to put tangible effort into doing likewise for the projects and deployments I have responsibility over, and insofar as I can, I’ll do the same kind of advocacy in other contexts too. So while a narrow cost-benefit analysis might yield an impression that such work has only local and minor impact, don’t underestimate the second-, third-, and nth-order effects of others seeing someone pursuing such work around their own negligible carbon footprint.
Or find a way so it doesn’t have to do the work at all. E.g. I found the memtrace support in OCaml 4.11+ quite useful in this regard for pinpointing high allocation rates in code that shouldn’t have been there in the first place (it kept allocating and deallocating a 64k buffer every time, which could’ve been completely avoided).
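For anyone who hasn’t tried it: enabling memtrace is a one-liner if you have the memtrace library installed. A small sketch, where the allocation loop is just a stand-in for the kind of hot path that shows up in a trace:

```ocaml
let () =
  (* Starts tracing only when the MEMTRACE environment variable names an
     output file, e.g.  MEMTRACE=trace.ctf ./main.exe
     The resulting trace can be explored with memtrace-viewer. *)
  Memtrace.trace_if_requested ();
  (* A deliberately allocation-heavy loop: a fresh 64k buffer on every
     iteration, the kind of avoidable churn a trace makes easy to spot. *)
  for _ = 1 to 100_000 do
    let buf = Bytes.create 65_536 in
    ignore (Bytes.length buf)
  done
```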
Indeed, this is an often overlooked aspect. For example to allow one of your CPU cores to use the higher turbo frequencies some of the other cores have to save energy and enter some deeper C states, otherwise most of the time there simply won’t be enough thermal headroom to achieve turbo frequencies (or not for a long time).
And although I have no particular opinion on the debate about OCaml’s carbon footprint, I can quite easily get behind the caching and performance aspects of it. If I can get faster code, faster builds, and faster CI runs (that cache and reuse previous similar runs), and if a laptop consumes less power so its battery lasts longer, everyone wins.
(I think the ‘powertop’ project found that it is more efficient to wake a CPU core up to full speed so it finishes its work as quickly as possible, i.e. to minimize time spent outside of deep C states, rather than to run at a low frequency, take longer to finish the work, spend longer out of deep C states, and in the end consume more power. Obviously this doesn’t apply to continuously running tasks.)
If that ultimately helps reduce the carbon footprint too, that is an added bonus!
And on that note, I’ve been using dune’s cache feature for quite a while now, and at least in my projects I’ve rarely found bugs in it. The only problem is that it will eventually eat up all your disk space (over several months), even if you have hundreds of gigabytes, so running `dune cache trim` every now and then is a necessity.
It is very easy to turn on: the following (in your shell’s profile, for example) will turn on dune caching both for opam builds and regular builds. (Perhaps counter-intuitively, turning on caching in ~/.config/dune/config will be invisible to opam builds, because that file is not visible in the opam build sandbox, but the environment variable is.)
export DUNE_CACHE=enabled
# If you have a particularly unusual partition layout, or you use sandboxing, you might need this too.
# Use `opam install <package> --verbose` to check whether cache misses report an `EXDEV` error; if they do, you need this:
export DUNE_CACHE_STORAGE_MODE=copy
Removing and reinstalling the same package should take seconds now (with most of the time spent in the opam solver), and this is quite handy when installing a package that suddenly wants to rebuild half of the installed package universe.
What would be the best place to collect ideas/tips like these?
(ideally opam, dune and its sandbox could cooperate better so that this all works seamlessly by default, but meanwhile a few config line entries that speed up most of my builds have been very useful for me, and I’m not sure how widely known that setup is)