
Comment by kristopolous

5 days ago

What probably needs to exist is something like `llsed`.

The invocation would look something like this:

    llsed --host 0.0.0.0 --port 8080 --map_file claude_to_openai.json --server https://openrouter.ai/api

Where the JSON has entries something like

    { "tag": ..., "from": ..., "to": ..., "params": ..., "pre": ..., "post": ... }

So if one call maps to two, you can chain multiple calls in the pre or post hooks, or rearrange things accordingly.
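As a sketch, a single map entry for translating Anthropic-style requests to OpenAI-style ones might look like the following. Every concrete value here is an assumption for illustration (the endpoints, the `max_tokens` rename, and the hook names `split_system_prompt` and `merge_tool_calls` are all hypothetical):

```json
[
  {
    "tag": "messages",
    "from": "/v1/messages",
    "to": "/v1/chat/completions",
    "params": { "max_tokens": "max_completion_tokens" },
    "pre": ["split_system_prompt"],
    "post": ["merge_tool_calls"]
  }
]
```

The idea being that `params` handles simple field renames, while `pre`/`post` carry anything that needs real logic.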

This sounds like the proper separation of concerns here... probably

The pre/post hooks should probably be JSON-RPC calls that get lazy-loaded.
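A minimal sketch of what a JSON-RPC-shaped hook dispatch could look like, assuming hooks are JSON-RPC 2.0 requests run against a local handler table. The names here (`HOOKS`, `rename_params`, `dispatch`) are all hypothetical, and a real version would lazy-import the handler module on first use rather than registering eagerly:

```python
import json

# Hypothetical registry of pre/post hooks, keyed by JSON-RPC method name.
HOOKS = {}

def hook(name):
    """Register a handler function under a JSON-RPC method name."""
    def register(fn):
        HOOKS[name] = fn
        return fn
    return register

@hook("rename_params")
def rename_params(params):
    """Rename request-body fields according to a mapping (e.g. from the map_file)."""
    body, mapping = params["body"], params["mapping"]
    return {mapping.get(k, k): v for k, v in body.items()}

def dispatch(rpc_json):
    """Run one JSON-RPC 2.0 request string against the hook table."""
    req = json.loads(rpc_json)
    result = HOOKS[req["method"]](req["params"])
    return {"jsonrpc": "2.0", "id": req["id"], "result": result}

request = json.dumps({
    "jsonrpc": "2.0", "id": 1, "method": "rename_params",
    "params": {"body": {"max_tokens": 1024},
               "mapping": {"max_tokens": "max_completion_tokens"}},
})
print(dispatch(request)["result"])  # {'max_completion_tokens': 1024}
```

Keeping the hook interface to plain JSON in/JSON out is what would let the hooks live out-of-process later without changing the map_file format.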

Writing that now. Let's do this: https://github.com/day50-dev/llsed

Some unsolicited advice: streaming support is tricky. I'd strip streaming out of the proxy until everything else is solid.

  • Cool. Sounds good. Thanks. I'll do it.

This will be a bit challenging, I'm sure, but I agree: litellm and friends do too many things, and it takes too long to get simple asks out of them.

    I've been pitching this suite I'm building as "GNU coreutils for the LLM era"

It's not sticking, and nobody is hyped about it.

I don't know if I should keep going, or if this is my same old pattern cropping up again: things I really, really like but that only ever land with me.

    • So I've pitched this a few more times. It's way too complicated for people.

      The value comprehension market is small

      So I'll need to surface it better or just do something else