I would actually be happy enough if they land the ability to name the template files with extensions like .ml.html, so that my editor thinks it’s a normal HTML file. That should cover about 80% of the cases for like 20% of the effort. EDIT: actually, I just realized I had misunderstood @antron’s comment and this is something we can do right now. Nice! dream/example/w-template-files at master · aantron/dream · GitHub
Yeah, that gets you highlighting, but nothing else, unfortunately. I’m not sure I could be very productive either with no code completion/navigation in template files, or by structuring them so that they’re effectively code-free.
Surely you are joking, but IMO ‘heavy’ is about the LOC underpinning a given functionality: your own code, third-party code, and transitive dependencies alike. I avoid frameworks and rather strip things down to near-trivial designs, scaling towards n=1 in terms of users and dynamic requests. So I prefer serving the web via raw data from static files or CGIs rendered via XSLT (client-side templating, no JS). The CGIs work similarly to UNIX text filters: http://demo.mro.name/geohash.cgi. This works wonders for small things, and it would be interesting to see how far it could get you.
For templating I found that Jinja2-style templates and a component-based approach with TyXML work equally well. I share the views of @cemerick and mostly use .re for now.
When using HTMX, URL management is also my biggest pain point. I found that it is not actually the stringly typing that is annoying, but the ad-hoc building of URL paths all over the place. One thing that helped was to explicitly name the URLs.
What I meant by “stringly unchecked dependencies” is your code making assumptions about URL structure in different places of your codebase in the form of strings (e.g. repeated formatting of the form "/path/to/record/%d"). This means that when you start making changes to your URL structure, your application silently breaks, and that’s actually immensely annoying because it’s called a bug.
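To make this concrete, a minimal sketch (the paths and function names here are purely illustrative): the fragile style repeats the URL structure at every call site, while naming the URL gives it a single point of change.

(* scattered across a codebase, each site repeats the structure: *)
let edit_link id = Printf.sprintf "/path/to/record/%d/edit" id   (* in one module *)
let row_target id = Printf.sprintf "/path/to/record/%d" id       (* in another *)

(* naming the URL once localizes the assumption: *)
let record_url ?(suffix = "") id = Printf.sprintf "/path/to/record/%d%s" id suffix
let edit_link id = record_url ~suffix:"/edit" id
let row_target id = record_url id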
I don’t think this problem is peculiar to htmx; binding your data structures and actions to URLs (or rather URL requests) is a problem with the web in general, one you also get when you devise or interact with REST web services.
Now, most of these “router” abstractions and web frameworks never seem to care about this problem: they only decode URLs, they don’t encode them (AFAIR when I researched the subject again a few years ago, Eliom and Haskell’s Yesod were notable exceptions). That’s what this Kurl thing I linked earlier (which also needs a few more design rounds) tries to solve, by basically encouraging you to move away from the web’s broken data structures, which most web frameworks out there seem keen to push on you all over your code base. No “ad-hoc building of URL paths all over the place”.
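The rough shape of the idea (an illustrative sketch, not Kurl’s actual API): define your request values once, and keep decoding and encoding side by side over that single definition, so they cannot silently drift apart.

type record_req =
  | Show of int
  | Edit of int

(* decoding: URL path -> value *)
let dec path =
  match String.split_on_char '/' path with
  | [""; "record"; id] -> Option.map (fun i -> Show i) (int_of_string_opt id)
  | [""; "record"; id; "edit"] -> Option.map (fun i -> Edit i) (int_of_string_opt id)
  | _ -> None

(* encoding: value -> URL path; the only place the structure is written out *)
let enc = function
  | Show id -> Printf.sprintf "/record/%d" id
  | Edit id -> Printf.sprintf "/record/%d/edit" id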
Regarding the discussion about templating, I’m always surprised by people liking to mix the terribly verbose HTML syntax with their programming language when you can get the excellent definitional conciseness and understanding of a few functional combinators. Besides, if you need to interact with a designer you’re likely better off with logic-less template fragments that you compose in your code, rather than this mumbo-jumbo of HTML and ML code which your designer will keep on breaking.
I also remain quite astonished by TyXML’s popularity, which in my opinion has terrible type and refactoring usability and which, despite what the advertisement says, won’t guarantee that your pages are going to be valid (how are you going to guarantee that your ids are unique?). I did try the approach at the beginning of the century, when the technique came out, before deciding that this was putting lipstick on a pig and preferring instead to treat the medium as an untyped assembly language. I have never looked back.
…and still brag about separation of content and style, immediately afterwards (automatically thinking only of HTML/CSS). As if they wouldn’t put style into the markup (e.g. Bootstrap classes) and so taint the content. Web devs are a weird bunch.
The reason for mixing HTML and programming languages is that it’s incredibly painful to rebuild chunks of the DOM programmatically. Of course, had the web had proper widgets, this wouldn’t be nearly as difficult. Instead we’re rebuilding everything again and again with custom code.
It’s not, IMO (leaving aside the value [or not] of TyXML’s model), which is why I’m using the JSX option.
Combinators and all the various permutations of transliterated HTML that could exist are very rarely more concise than the HTML they represent or generate. Likewise, those other options are hardly more “understandable” than HTML itself, the structure of which is generally plainly evident in a form like JSX (compared to having to mentally interpret e.g. combinators).
I’ve rarely had the pleasure of working with a designer, so that’s not really a factor for me. My aim is to be able to write and sometimes capture HTML (e.g. copy/paste bits or even larger blobs like those from “component” libraries), plug it into whatever display logic is necessary, and then manipulate it as time passes with a minimum of fuss. I spent a decade-plus with Lisps and Haskell using probably a dozen different HTML transliterations and combinator options; in contrast, the experience of using something like JSX is more efficient and pleasant on every score.
Logic-free templates (like mustache, I guess?) are the worst of both worlds IMO: you get dull programming tools within and around a heavily-tainted HTML representation.
Yeah, the same can be said of the lack of direction provided in conjuring up the DOM identifiers (both #ids and other CSS selectors needed for swap selection and such). I think there’s a big open space waiting for the development of “component” contexts when writing htmx frontends that would reliably generate both routing structure and DOM identifiers.
To eliminate at least some of this pain, I’ve taken to pushing 90% of htmx traffic through its websocket channel, and then disambiguating actions based on the opaque "name" that each “request” payload carries. Yes, I still need to pick those names, but they are application-specific, and can be anything (even auto-generated based on app entities, etc), so the burdens of URL etiquette are avoided.
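A rough sketch of that dispatch, assuming Dream, Yojson, and lwt_ppx; the payload shape and the action names are illustrative, not htmx’s exact wire format.

(* hypothetical fragment builders; real ones would render app state *)
let handle_action name _payload =
  match name with
  | "add-item" -> "<li>new item</li>"
  | "remove-item" -> "<li hidden></li>"
  | _ -> "<div>unknown action</div>"

let ws_route =
  Dream.get "/ws" (fun _request ->
    Dream.websocket (fun ws ->
      let rec loop () =
        match%lwt Dream.receive ws with
        | Some message ->
          let json = Yojson.Safe.from_string message in
          let name =
            Yojson.Safe.Util.(member "name" json |> to_string_option)
            |> Option.value ~default:"" in
          let%lwt () = Dream.send ws (handle_action name json) in
          loop ()
        | None -> Dream.close_websocket ws
      in
      loop ()))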
Not trying to convince you, but I’d just mention that while they may not be much more concise, they are natural in the language you work with, and I personally don’t find interpreting El.p […] as <p>…</p> especially challenging.
Perhaps more importantly, by sticking to one language you also avoid the mess of quotations and antiquotations (which I find mentally more challenging) and trivially solve your editor support problem.
Heh, yeah, we’re deep in the weeds of ergonomic concerns, where it’s rightly impossible to convince anyone of anything.
It’s not about the interpretation being challenging, but that it’s necessary at all. Sure, the structure of the simplest stuff is evident. But even fairly trivial examples of real-world HTML get pretty hairy IMO. For example, this:
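Something along these lines (a stand-in pair, not the exact snippets from the post):

(* The HTML:

     <div class="card">
       <h3 class="font-medium">Title</h3>
       <p>Some copy, with a <a href="/more">link</a>.</p>
     </div>

   and the same fragment as TyXML combinators: *)
open Tyxml.Html

let card_fragment =
  div ~a:[a_class ["card"]]
    [ h3 ~a:[a_class ["font-medium"]] [txt "Title"];
      p [txt "Some copy, with a "; a ~a:[a_href "/more"] [txt "link"]; txt "."] ]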
Yes, the latter is OCaml, which I really enjoy, but a misapplication of it IMO. Likewise, writing JSON using Yojson notation or OCaml using only its AST is just as unnatural vs. trading in the actual medium in question.
Yep. Every language I know of is much harder to handle mentally as AST than as syntax (except perhaps lisp, where AST = syntax). And that’s coming from someone who wrote pure AST for around 3 years.
I’m hard-pressed to think of a designer who would prefer to work with templates composed in an OCaml syntax, rather than in something as close to HTML as possible. We have quite a lot of flexibility with Dream, for example, because of its approach to templating. We can split up every template or partial into a separate file, with only a single function header at the top of the file being the most prominent piece of OCaml code, and the rest of the file being regular old HTML markup with some spliced content mixed in, a strategy most designers should be fairly comfortable with by now.
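For the flavor of it, a minimal template file along those lines (adapted from Dream’s documented templater example):

let render message =
  <html>
  <body>
    <h1>Greeting</h1>
    <p>The message is <%s message %>!</p>
  </body>
  </html>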
But you get something close to the original HTML as it was created by a UI dev or designer. You may even get them to edit the template if a small tweak is needed.
Is the only way to integrate it with dune still to define one rule per eml file?
Yeah, and that’s a good point. The dune rules are a bit clumsy to set up and would be annoying to keep track of properly as more fragment files are created. If the eml processor could somehow find all the eml files by itself and process all of them, that would be great. If anyone knows of a better alternative, I’m interested to hear it.
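For reference, the current setup needs one stanza like this per template (along the lines of Dream’s examples; the file names are illustrative):

(rule
 (targets template.ml)
 (deps template.eml.html)
 (action (run dream_eml %{deps} --workspace %{workspace_root})))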
EDIT: or actually, it would be better if we could define generic rules in dune, e.g. something like:
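(Hypothetical syntax; dune has no such wildcard rule today, which is exactly the point.)

; hypothetical: one rule covering every .eml.html file in the directory
(rule
 (targets %{name}.ml)
 (deps %{name}.eml.html)
 (action (run dream_eml %{deps} --workspace %{workspace_root})))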
I agree that the raw translation is hardly better (worse, even) than the original HTML, but I find the benefit of using TyXML combinators is the ability to build your own combinators on top of the ones provided by the core library.
Consider:
module HTML = Tyxml_html

(* shorthand for building a class attribute *)
let a_cls elts = HTML.a_class elts

(* div taking its classes as a plain list, plus any extra attributes *)
let div ?(a_class = []) ?(a = []) elts = HTML.div ~a:(a_cls a_class :: a) elts

let p txt = HTML.p [HTML.txt txt]

let medium_txt txt = HTML.h3 ~a:[a_cls ["font-medium"]] [HTML.txt txt]

(* domain-specific combinators built on the generic ones *)
let card_container elts =
  div ~a_class:["flex"; "flex-col"; "rounded"; "shadow-sm"; "bg-white"; "overflow-hidden"] elts

let card_title txt =
  div ~a_class:["py-4"; "px-5"; "lg:px-6"; "w-full"; "bg-gray-50"] [medium_txt txt]

let card_footer txt =
  div ~a_class:["py-4"; "px-5"; "lg:px-6"; "w-full"; "text-sm"; "text-gray-600"; "bg-gray-50"]
    [p txt]

let card () =
  card_container [
    card_title "Card Title";
    card_footer "Card Footer"
  ]
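Rendering the result to a string for an HTTP response is then a one-liner, using TyXML’s pp_elt:

let html_of_card () = Format.asprintf "%a" (HTML.pp_elt ()) (card ())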