Common Lisp SDK for the Datastar Hypermedia Framework

This is my attempt at something that makes using Common Lisp with Datastar easier. To test the SDK I made a demo that simulates the Cassini-Huygens mission using the NASA SPICE toolkit and the JPL Horizons API: https://dataspice.interlaye.red/

The Datastar API itself is very simple, 3 functions or so. I ended up spending a lot more time on things like keeping the SSE stream open, compression support (zstd only at the moment), and trying to use CLOS in a way that fits both Hunchentoot and Clack (not always easy).
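
To make the blocking part concrete, a bare-bones SSE handler with plain Hunchentoot and flexi-streams (not the SDK's API; handler name and payload are made up) looks roughly like this, and the important bit is that it never returns:

  (hunchentoot:define-easy-handler (sse-demo :uri "/sse") ()
    (setf (hunchentoot:content-type*) "text/event-stream"
          (hunchentoot:header-out :cache-control) "no-cache")
    ;; SEND-HEADERS returns a binary stream; wrap it for character output.
    (let ((out (flexi-streams:make-flexi-stream (hunchentoot:send-headers)
                                                :external-format :utf-8)))
      (loop
        ;; Keeping the stream open means the handler never returns: fine
        ;; with a thread-per-connection server, fatal for a single event loop.
        (format out "data: tick~%~%")
        (finish-output out)
        (sleep 1))))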

  • Very nice, thank you. The tests directory is good for testing, and I suggest adding an examples directory with a few short, complete examples.

    • Thanks! Yes, I think I will add the JPL Horizons demo there; it's essentially a 20-line file.

> Each SSE connection blocks one worker for its entire duration.

Have you tried Wookie? Such an extreme case of blocking the event loop negates any benefit of async processing.

  • An update: I've spent some time taking a much deeper look, and while I can't guarantee it's perfect, I added a different approach for Clack+Woo, documented here: https://github.com/fsmunoz/datastar-cl/blob/main/SSE-AND-WOO...

    In short: I've replaced the Common Lisp loop (which works for Hunchentoot, since it spawns threads, but not for Woo, since it blocks the event loop) with a deeper integration into the event loop:

    > And that was the main change: looking at the innards of it, there are some features available, like woo.ev:evloop. This was not enough, and access to the libev timer was also needed. After some work with lev and CFFI, the SDK now implements a Node.js-style approach using libev timers via woo.ev:evloop and the lev CFFI bindings (check woo-async.lisp).

    This is likely (almost surely) not perfect or even ideal, but it does seem to work, and I've been testing the demo app with 1 worker and multiple clients.
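
    The shape of it, stripped down: this is not the actual woo-async.lisp code, add-timer here is a stand-in defined with a plain thread just to keep the sketch self-contained, where the real code arms a libev timer on woo.ev:evloop through the lev bindings. The point is only that a re-arming callback replaces the blocking loop:

      (defun add-timer (seconds fn)
        ;; Stand-in only: the real code registers a libev timer on
        ;; woo.ev:evloop via lev; a thread + SLEEP keeps this sketch portable.
        (bt:make-thread (lambda () (sleep seconds) (funcall fn))))

      (defun start-ticker (write-event &key (interval 1))
        ;; WRITE-EVENT is whatever closure writes one SSE event to the
        ;; client and flushes it. Each tick re-arms the timer instead of
        ;; sleeping, so the worker/event loop is never held.
        (labels ((tick ()
                   (funcall write-event (format nil "data: tick~%~%"))
                   (add-timer interval #'tick)))
          (add-timer interval #'tick)))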

  • I haven't tried Wookie, since adding Clack+Woo was already a substantial change. Reading https://fukamachi.hashnode.dev/woo-a-high-performance-common... , where it is compared with Wookie, I'm not sure it would make a difference: I might be wrong, but it says:

    > Of course, this architecture also has its drawbacks as it works in a single thread, which means only one process can be executed at a time. When a response is being sent to one client, it is not possible to read another client's request.

    ... which for SSE seems to be essentially the same issue as with Woo. I wrote a bit more about it in https://github.com/fsmunoz/datastar-cl/blob/main/SSE-WOO-LIM... , and it may be more of a "me" problem than anything else, but keeping an SSE stream open doesn't play well with async models. That's why I added a with-sse-response macro that, unlike with-sse-connection, sends events without keeping the connection open.
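
    In rough terms the difference in use is the following (simplifying the argument lists, and send-event here is just a placeholder for the event-sending call; the README has the real signatures):

      ;; Holds the connection and keeps streaming: fine on Hunchentoot's
      ;; thread-per-connection model, problematic on Woo's single loop.
      (with-sse-connection (sse)
        (loop
          (send-event sse "tick")
          (sleep 1)))

      ;; Writes its events and returns immediately, without keeping the
      ;; connection open, which is why it sits better with async servers.
      (with-sse-response (sse)
        (send-event sse "one-shot update"))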

    • Wookie is built on cl-async, so my hope is that it's more tractable to write a proper async SSE handler. But I haven't looked at whether it's possible to keep a connection open asynchronously.

Thanks for sharing. I’m curious: why does the example SPICE application use Fortran to parse the SPICE data?

  • The CL-SPICE library I used, which wraps the SPICE C library through CFFI, doesn't cover the type of SPICE kernel I wanted to use for the Comms module. I could try to add it, but that could be more involved than I expected and would put the whole thing on hold.

    So I used the Fortran SPICE toolkit, since I had used it before and it's reasonably small and easy. The alternative would have been the C toolkit, but I went with Fortran since I already had most of the code from a previous project.
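
    For reference, extending the CFFI coverage would mean more of this kind of thing (a sketch against the CSPICE C toolkit directly, not CL-SPICE's actual internals; furnsh_c and str2et_c are real CSPICE entry points, the rest is illustrative and the shared library name will vary by install):

      (cffi:define-foreign-library cspice
        (t (:default "libcspice")))   ; library name/path depends on the install
      (cffi:use-foreign-library cspice)

      ;; furnsh_c loads a kernel file (SPK, CK, meta-kernel, ...).
      (cffi:defcfun ("furnsh_c" %furnsh) :void
        (file :string))

      ;; str2et_c converts a time string to ephemeris seconds past J2000.
      (cffi:defcfun ("str2et_c" %str2et) :void
        (timstr :string)
        (et (:pointer :double)))

      (defun str2et (time-string)
        (cffi:with-foreign-object (et :double)
          (%str2et time-string et)
          (cffi:mem-ref et :double)))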