Comment by dagss
2 days ago
Nice to see HTTP API for consuming events.
I wish there were a standard protocol for consuming event logs, so that all the client-side tooling for processing them didn't care which server was behind it.
I was part of making this:
https://github.com/vippsas/feedapi-spec
https://github.com/vippsas/feedapi-spec/blob/main/SPEC.md
I hope some day there will be a widespread standard that looks something like this.
An ecosystem building on Kafka client libraries with various non-Kafka servers would work fine too, but we didn't figure out how to easily do that.
This resonates a lot.
I’d love a world where “consume an event log” is a standard protocol and client-side tooling doesn’t care which broker is behind it.
Feed API is very close to the mental model I’d want: stable offsets, paging, resumability, and explicit semantics over HTTP. Ayder’s current wedge is keeping the surface area minimal and obvious (curl-first), but long-term I’d much rather converge toward a shared model than invent yet another bespoke API.
If you’re open to it, I’d be very curious what parts of Feed API were hardest to standardize in practice and where you felt the tradeoffs landed in real systems.
I don't have that much to offer... we just implemented it for a few different backends sitting on top of SQL. The concept works (obviously, as there is not much there). The main challenge was getting safe export mechanisms from SQL, i.e. a column in tables you can safely use as a cursor. The complexity in achieving that was really our only problem.
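A minimal sketch of what that cursor-based export pattern looks like, assuming a hypothetical `events` table with a `seq` column (the table, column names, and query are mine, not from the Feed API spec). The comment flags the subtlety dagss mentions: the cursor column must be monotonic in commit order, which a plain autoincrement id is not.

```python
import sqlite3

# Sketch of cursor-based export from SQL (hypothetical "events" table).
# The cursor is the last "seq" value a consumer has seen. For this to be
# SAFE, "seq" must be monotonic in commit order: with a plain autoincrement
# id, a row with a lower id can commit after a row with a higher id and be
# silently skipped by a consumer that has already paged past it.

def fetch_page(conn, cursor, limit=100):
    """Return (rows, new_cursor) for events strictly after `cursor`."""
    rows = conn.execute(
        "SELECT seq, payload FROM events WHERE seq > ? ORDER BY seq LIMIT ?",
        (cursor, limit),
    ).fetchall()
    new_cursor = rows[-1][0] if rows else cursor
    return rows, new_cursor

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (seq INTEGER PRIMARY KEY, payload TEXT)")
conn.executemany(
    "INSERT INTO events (seq, payload) VALUES (?, ?)",
    [(i, f"event-{i}") for i in range(1, 8)],
)

page1, c1 = fetch_page(conn, cursor=0, limit=3)   # events 1..3
page2, c2 = fetch_page(conn, cursor=c1, limit=3)  # events 4..6, resumable
```

The paging itself is trivial; as dagss says, the hard part is guaranteeing the `seq` assignment semantics in a real multi-writer database.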
But because there wasn't any official spec, it became a topic of organizational bikeshedding. That would have been avoided by having more mature client libs and a spec provided externally.
This spec is a bit complex, but it is complexity that is needed to support a wide range of backend/database technologies. Simpler specs are possible by making more assumptions about (or hardcoding) how the backend/DB works.
It has been a few years since I worked with this, but reading it again now I still like it in this version. (This spec was the 2nd iteration.)
The partition splitting etc was a nice idea that wasn't actually implemented/needed in the end. I just felt it was important that it was in the protocol at the time.
That makes a lot of sense: the hard part isn't "HTTP paging", it's defining a safe cursor (in SQL that becomes "which column is actually stable/monotonic"), and without an external spec/libs it turns into bikeshedding. In Ayder the cursor is an explicit per-partition log offset, so resumability/paging is inherent, which is why Feed API's mental model resonates a lot. I'd love to see a minimal "event log profile" of that spec someday.
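To make the "offset as cursor" point concrete, here's a sketch of the consumer loop that model implies, with the HTTP call stubbed by an in-memory log (the endpoint shape, function names, and page size are hypothetical, not Ayder's actual API):

```python
# Sketch of a resumable consumer over explicit per-partition log offsets.
# A real client would do something like GET /log/{partition}?from={offset};
# here fetch_events stubs that call with an in-memory log to show the loop.

LOG = {0: ["a", "b", "c", "d"], 1: ["x", "y"]}  # partition -> event list

def fetch_events(partition, offset, limit=2):
    """Stub for one HTTP page fetch: return (events, next_offset)."""
    events = LOG[partition][offset:offset + limit]
    return events, offset + len(events)

def consume(partition, offset):
    """Drain one partition starting at `offset`. The offset itself is the
    cursor, so a crashed consumer resumes by persisting it and replaying
    from the last durable value -- no server-side consumer state needed."""
    out = []
    while True:
        events, offset = fetch_events(partition, offset)
        if not events:
            return out, offset
        out.extend(events)

events, final = consume(partition=0, offset=1)  # resume after event "a"
```

Because the offset is explicit and per-partition, "where was I?" is a single integer the consumer owns, which is exactly what makes the paging/resumability inherent rather than bolted on.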