Hi,
I also got an A-star tutorial done with ChatGPT.
I think the result is quite good (at least for what I personally expect from an A-star tutorial in OCaml), and I have to admit that it is better than what I could produce alone with my modest skills.
Translations into other languages can also be produced with minimal effort.
The first implementation produced by ChatGPT was not that good. To get a better result, I requested a Haskell implementation and then asked it to translate that into OCaml, which gave a better result.
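For readers who have not seen the algorithm before, a minimal A* over a 4-connected grid can be sketched in OCaml roughly like this. This is my own illustrative sketch, not the tutorial's code; sorting a plain list stands in for a priority queue, which keeps it short at the cost of efficiency:

```ocaml
(* Minimal A* path finding on a 4-connected grid (illustrative sketch). *)
module Astar = struct
  type pos = int * int

  (* Manhattan distance: an admissible heuristic on a 4-connected grid. *)
  let manhattan (x1, y1) (x2, y2) = abs (x1 - x2) + abs (y1 - y2)

  (* [find_path ~walkable ~start ~goal] returns the path from [start] to
     [goal] (inclusive) as a list of positions, or [] if none exists.
     The open set holds tuples (f, g, position, reversed path so far);
     sorting it puts the node with the lowest f = g + h first. *)
  let find_path ~walkable ~start ~goal =
    let open_set = ref [ (manhattan start goal, 0, start, [ start ]) ] in
    let visited = Hashtbl.create 64 in
    let rec loop () =
      match List.sort compare !open_set with
      | [] -> []
      | (_, g, p, path) :: rest ->
        open_set := rest;
        if p = goal then List.rev path
        else if Hashtbl.mem visited p then loop ()
        else begin
          Hashtbl.add visited p ();
          let x, y = p in
          List.iter
            (fun q ->
              if walkable q && not (Hashtbl.mem visited q) then
                open_set :=
                  (g + 1 + manhattan q goal, g + 1, q, q :: path) :: !open_set)
            [ (x + 1, y); (x - 1, y); (x, y + 1); (x, y - 1) ];
          loop ()
        end
    in
    loop ()
end
```

A real implementation would use a proper priority queue (e.g. a binary heap or `Set` keyed on f), but the structure of the algorithm is the same.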
I also got code for OpenGL’s half floats (floats stored in 16 bits) done with ChatGPT, something I had also failed to do alone in the past (but I need further testing before sharing it).
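For the curious, a conversion between OCaml floats and IEEE 754 binary16 (the format behind OpenGL's `GL_HALF_FLOAT`) can be sketched as below. This is my own illustrative sketch, not the generated code mentioned above; it truncates the mantissa rather than rounding, which is simpler but loses half a bit of precision:

```ocaml
(* Encode a float as a 16-bit half float (IEEE 754 binary16), truncating.
   [Int32.bits_of_float] gives the single-precision bit pattern. *)
let float_to_half (f : float) : int =
  let bits = Int32.bits_of_float f in
  let sign = Int32.to_int (Int32.shift_right_logical (Int32.logand bits 0x80000000l) 16) in
  let exp = Int32.to_int (Int32.logand (Int32.shift_right_logical bits 23) 0xffl) in
  let mant = Int32.to_int (Int32.logand bits 0x7fffffl) in
  if exp = 255 then
    (* Infinity or NaN: keep a nonzero mantissa bit for NaN. *)
    sign lor 0x7c00 lor (if mant <> 0 then 0x200 else 0)
  else
    let e = exp - 127 + 15 in  (* rebias exponent: 127 (single) -> 15 (half) *)
    if e >= 31 then sign lor 0x7c00                 (* overflow -> infinity *)
    else if e <= 0 then
      if e < -10 then sign                          (* underflow -> signed zero *)
      else sign lor ((mant lor 0x800000) lsr (14 - e))  (* subnormal *)
    else sign lor (e lsl 10) lor (mant lsr 13)      (* normal number *)

(* Decode a 16-bit half float back to an OCaml float. *)
let half_to_float (h : int) : float =
  let sign = (h lsr 15) land 1 in
  let exp = (h lsr 10) land 0x1f in
  let mant = h land 0x3ff in
  let s = if sign = 1 then -1.0 else 1.0 in
  if exp = 0 then s *. float_of_int mant *. (2.0 ** -24.0)        (* subnormal *)
  else if exp = 31 then (if mant = 0 then s *. infinity else nan) (* inf / NaN *)
  else s *. (1.0 +. float_of_int mant /. 1024.0) *. (2.0 ** float_of_int (exp - 15))
```

Production code would add round-to-nearest-even, but round-tripping exactly representable values already works with truncation.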
I also tried ChatGPT with ReScript; the results are very poor compared to what it can produce with OCaml.
I also added some new chapters to the OCaml tutorial. All of this documentation was generated with ChatGPT, and I only had to make two minor corrections to it. I personally think the result is not that bad. It is of course not as good as D. Baturin’s book.
Firstly, you have lots of descriptive English in there. If you feed it into a text-to-image model like Flux.1 Schnell you’ll get loads of beautiful artwork to render alongside your literature.
Secondly, Alibaba released Qwen2.5-coder:32b yesterday which I have found to be great at generating code including OCaml code and it runs locally on your machine (I use Ollama). Maybe that will help you generate more content faster? You can change the system prompt to include things like the signatures of modules you commonly use (I use Unix, Lwt, Cohttp, Uri, Yojson, Ezxmlm and Soup) so it tends to produce the style of OCaml you love.
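For example, such a system prompt can be baked into a local model with an Ollama Modelfile. This is an illustrative sketch: the model tag and `ollama` commands are real, but the prompt wording and the `ocaml-helper` name are my own:

```
FROM qwen2.5-coder:32b

SYSTEM """
You are an OCaml assistant. Prefer code built on the Unix, Lwt,
Cohttp, Uri, Yojson, Ezxmlm and Soup libraries, and match their
published module signatures.
"""
```

Then `ollama create ocaml-helper -f Modelfile` followed by `ollama run ocaml-helper` gives you a local model that defaults to that style.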
Thirdly, with AIs you can generate content not only in many programming languages but also in many natural languages so you can cater for all combinations.
Fourthly, using a library like MDX you can make sure the generated OCaml code type checks and passes tests.
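Concretely, mdx re-executes OCaml toplevel blocks embedded in a markdown file and compares the printed output. A sketch of what such a checked block looks like (the block convention is mdx's real one):

````markdown
```ocaml
# List.map (fun x -> x * 2) [1; 2; 3];;
- : int list = [2; 4; 6]
```
````

With dune's `(mdx)` stanza, `dune runtest` replays the `#` phrases and fails (offering a promotable diff) whenever the generated code no longer type checks or its output drifts.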
I’m sorry there are no English translations for all of them yet.
I also tried an OCaml story with Cleopatra, where her scribes use a magical abacus running OCaml to draw the cartography of Egypt and find the best path for bringing the stones to build the temples, but I was not completely happy with the result yet. The ones above are not perfect either, but I’m happy with them. They can probably be improved, but I think they are already nice to read.
For now it’s only something nice to read (for those who like this kind of thing). Making it a real tutorial that teaches a language, embedded in a real story, is more challenging.
For what it’s worth, I only use ChatGPT and its ilk when I’m dealing with a hairy or unique problem where it’s not immediately obvious how to proceed and I’ve already spent a good chunk of time thinking about solutions. In my experience, all LLMs are pretty bad once you venture off the beaten path and try to start using them as a research tool. They basically excel at ideation and pointing you in possible directions to try on your own. Their “proofs of correctness,” etc., are more often than not incorrect if you’re trying to derive something that cannot be found online.
On the other hand, I find that generated “garden variety” OCaml is “good enough” (just as the generated Python is “good enough”).
Perhaps I’ve been using them wrong all along. Are people using them for mostly grunt work? I have to admit that my experience so far has made me very wary of trusting this output blindly, so this would seem to rule out even mindless work.