[ANN] Draft of OCaml Scientific Computing book

Liang @ryanrhymes and I have just finished the first draft of our book, OCaml Scientific Computing! You can read the draft here: https://lnkd.in/dSE6hEg

This book is a summary of our long-term dedication to functional programming and numerical computing. While the barrier to entry for data science, ML, and AI keeps getting lower thanks to the fast development of various powerful frameworks and toolkits, the tools themselves remain black boxes, mysterious to many data scientists in practice.

This book gives you a very different angle on data science, by illustrating how we, hardcore computer scientists and engineering computing experts, build a high-performance numerical system from scratch. In some sense, the book will help you find the missing link between a basic pseudo-random number generator and a fancy deep neural network application.

This book is not for those who just want to “cast the spell”, but rather for those who want to “make the magic more magic” :slight_smile:

Original Post on LinkedIn

An online version of this book is available at: https://ocaml.xyz/book/

49 Likes

This is a remarkable milestone, and I’m greatly enjoying reading through the book – congratulations! How would you like feedback sent? I’m just scribbling notes for myself at the moment, so I can structure it as you prefer in a few weeks when I finish reading.

5 Likes

Congrats!

The design of your website strongly reminds me of Real World OCaml’s. Is it the same design? Can I reuse it for my book?

1 Like

Do drop me an email on avsm2@cam.ac.uk if you’d like to use the design. It’s freely available, but we’re also steadily iterating on it to remove a lot of the custom generation and upstream the changes to various Platform tools. If I know who is using it, it’s easier to manage the various forks as we make changes to Real World OCaml in preparation for our upcoming v2 release.

1 Like

This looks like a great resource even for those not necessarily engaged in “data science”. Thank you for working on it!

1 Like

This is a great accomplishment, congrats!

2 Likes

Congratulations! In the Prologue, you mention “I intentionally avoid looking into the architecture of SciPy, Julia, Matlab to minimise their influence on Owl’s architecture, I really do not want yet another xyz …”, which I highly appreciate. But I’m curious: now that Owl has stabilised, is there any comparison of its architecture and features with other systems?

1 Like

I intentionally avoid looking into the architecture of SciPy, Julia, Matlab to minimise their influence on Owl’s architecture, I really do not want yet another xyz

This is a really interesting approach; it helps to innovate. Amazing work!

1 Like

Features: unfortunately, because of limited manpower, Owl will seriously lag behind SciPy and scikit-learn…
Note that I would love to be able to do machine learning using OCaml only…
But I know the size of the community behind those two Python projects, and I don’t think we will be able to compete.

2 Likes

It’s not only about 1:1 feature parity. For example, the FluxML umbrella project in the Julia world is not as featureful as the Python world, but it offers the ability to differentiate almost any Julia code, something the Python frameworks will never be able to do.
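
For what it’s worth, Owl takes the same bet on the OCaml side: its Algodiff module differentiates ordinary OCaml functions. Here is a minimal sketch against Owl’s public API; the function `f` below is just an arbitrary example, not anything from the book.

```ocaml
(* Differentiate an arbitrary OCaml function with Owl's Algodiff. *)
open Owl.Algodiff.D

(* an ordinary function, written with Algodiff's Maths operators *)
let f x = Maths.(sin x * exp (neg x))

(* [diff f] is the derivative of [f], itself an ordinary function *)
let f' = diff f

let () =
  Printf.printf "f'(1.0) = %f\n" (unpack_flt (f' (pack_flt 1.0)))
```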

2 Likes

As @XVilka pointed out to us, the Julia, Python, and R ecosystems have now joined forces in SciML, which is integrating each library into the others. So there will inevitably be a fast-growing gap.

I don’t know if and when we will be able to do numerical work in OCaml only, maybe never, but the slowly growing community and the increase in bug reports and feature requests from many of you is a good sign. After all (Julia aside), it took a long time for the other frameworks to become dominant and take over from Matlab.

I wonder how much of it boils down to creating hype, advocating, and really pushing resources into growing the community and advertising the framework. In any case, I think this book is a big step in the right direction.

2 Likes

Thanks Anil! Since the book is currently managed on GitHub (https://github.com/owlbarn/book), I think filing issues there would be a good way to do so. Of course, whatever is most convenient for you is fine, such as email to the authors or the Owl Slack channel etc. :slight_smile:

2 Likes

Thanks for the appreciation! Currently we don’t have a systematic comparison between Owl and NumPy etc. in the book; it is a great idea to add one once Owl matures. Thanks!

1 Like

There’s also another matter, which is cross-pollination of projects as a needed ingredient for growth. Owl has essentially had to implement NumPy in OCaml for high numerical performance. The side effect is that OCaml now has high-performance vectorized tensors available for other tasks. If you want to use Bigarrays as tensors, for example to create a physics simulation or a game engine, the best way to do that now is to use Owl’s extension of Bigarrays. Even if Owl isn’t used in industry for machine learning (a field which is advancing very rapidly), the libraries it had to build along the way are useful to many other projects. The more kinds of libraries get fleshed out, the better the ecosystem as a whole becomes, and the more attractive OCaml is to everyone.
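
To make that concrete, here is a minimal sketch of the kind of vectorized work this enables; the particle-update example is hypothetical, but the modules and functions are Owl’s public API, and the values involved are plain Bigarray.Genarray.t underneath.

```ocaml
(* A hypothetical physics-style update using Owl's ndarrays, which are
   ordinary Bigarray.Genarray.t values under the hood. *)
module Arr = Owl.Dense.Ndarray.D

let () =
  (* positions and velocities of 1000 particles in 3-D *)
  let pos = Arr.uniform [| 1000; 3 |] in
  let vel = Arr.gaussian [| 1000; 3 |] in
  let dt = 0.01 in
  (* one vectorized Euler integration step: pos' = pos + vel * dt *)
  let pos' = Arr.(pos + (vel *$ dt)) in
  (* because the representation is a Bigarray, the stdlib applies too *)
  Printf.printf "rank = %d, mean = %f\n"
    (Bigarray.Genarray.num_dims pos')
    (Arr.mean' pos')
```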

8 Likes

Logistic regression should be put in a chapter about classification, not in the chapter about regression. “Logistic regression” is badly named; it is a technique for classification.
Classification and regression are related but different, and it is confusing to see them in the same chapter.
Either you do regression, or you do classification; you don’t do both at the same time.
Regression is harder, and sometimes you cannot do it on a dataset while training a classifier is still possible (because it is easier).

I don’t know if you talk about bagging in the book (Breiman); this could go in the classification chapter. [Breiman, L. (1996). Bagging predictors. Machine Learning, 24(2), 123-140.] It is a very powerful technique for turning a weak classifier into a stronger one (it can also counter class imbalance), and it is very easy to parallelize.
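
For readers who haven’t met it, bagging is simple enough to sketch in a few lines of OCaml; the `train` function and the 0/1 integer labels below are hypothetical placeholders, not anything from the book or Owl.

```ocaml
(* A minimal sketch of bagging (Breiman 1996): train one classifier per
   bootstrap replicate of the data, then take a majority vote. *)

(* sample n points from the data, with replacement *)
let bootstrap_sample data =
  let n = Array.length data in
  Array.init n (fun _ -> data.(Random.int n))

(* train [rounds] classifiers, each on its own bootstrap replicate;
   the replicates are independent, so this parallelizes trivially *)
let bag ~rounds ~train data =
  Array.init rounds (fun _ -> train (bootstrap_sample data))

(* majority vote over the ensemble's 0/1 predictions *)
let predict ensemble x =
  let votes = Array.fold_left (fun acc c -> acc + c x) 0 ensemble in
  if 2 * votes >= Array.length ensemble then 1 else 0
```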

Useful book for sure, good luck.

3 Likes

Feature normalization is usually done using:
x_norm <- (x - mean(x)) / stddev(x)
I.e. compute a z-score.
I think using min(x) and max(x) is pretty dangerous if the dataset has outliers which are just noise / erroneous data points.
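
In Owl terms, the z-score suggested above takes only a few lines; a hedged sketch, assuming `x` is a double-precision ndarray and using what I believe are Owl’s `mean'`/`std'` scalar reductions.

```ocaml
(* z-score normalization: subtract the mean, divide by the std dev *)
module Arr = Owl.Dense.Ndarray.D

let z_score x =
  let mu = Arr.mean' x in
  let sd = Arr.std' x in
  Arr.((x -$ mu) /$ sd)
```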

3 Likes

I think this is a matter of perspective, depending on whether readers come from a machine learning or a statistics background. In statistics books, linear and logistic regression are usually treated as having similar objectives, one being appropriate when the response variable is continuous and the other when it is discrete. See for example Wasserman’s book ‘All of Statistics’, which has a chapter on ‘Linear and Logistic Regression’.
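
To spell that framing out with the standard definitions (not anything specific to the book): both models fit the same linear predictor and differ only in the response type and the link.

```latex
\mathbb{E}[Y \mid X = x] = \beta_0 + \beta^\top x
\qquad \text{(linear regression, continuous response)}

\log \frac{P(Y = 1 \mid X = x)}{1 - P(Y = 1 \mid X = x)} = \beta_0 + \beta^\top x
\qquad \text{(logistic regression, binary response)}
```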

4 Likes

Thanks for the helpful information and feedback! In drafting this chapter, I mainly followed the structure of the “Machine Learning” course by Andrew Ng. During revision, I will consider restructuring the material according to your suggestion. Thanks!

1 Like